Explainable Multi-perspective Business Process Anomaly Detection Method Based on Graph Neural Networks

Abstract

Anomalies in business processes can lead to significant losses, making timely detection and handling of these anomalies essential for business process management and optimization. Although current methods may uncover abnormal cases or attributes in event logs, they fail to provide adequate explanations for the anomalies they detect. To enable reliable detection, a multi-perspective anomaly detection and explanation method for business processes based on graph neural networks is proposed. First, a graph structure is constructed to reveal the dependencies between the various attributes. On this basis, a multi-graph neural network predictor is trained to predict each attribute of the next event separately. Then, based on the probability distribution of the prediction results, an anomaly score is calculated and anomalous attributes and cases are identified. In addition, when an anomaly is detected, a relevance score is assigned to the event attributes in the prefix trace; this score explains the rationale behind the detection. The experimental results demonstrate the method's efficacy in detecting anomalies in business processes, providing practical explanations, and enhancing the transparency and credibility of the model.
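As a rough illustration of the scoring idea described in the abstract (a minimal sketch, not the authors' implementation), an attribute-level anomaly score can be derived from a predictor's probability distribution over the next event's attribute values: the lower the probability the model assigned to the value actually observed, the higher the score. All names and the threshold below are hypothetical.

```python
import math

def attribute_anomaly_score(pred_dist, observed):
    """Anomaly score for one event attribute: negative log-probability
    that the predictor assigned to the value actually observed."""
    p = pred_dist.get(observed, 1e-9)  # unseen values get a tiny floor probability
    return -math.log(p)

def trace_anomaly_score(event_scores, threshold=2.0):
    """A trace is flagged if its maximum attribute score exceeds the
    threshold; the trace score is that maximum (threshold is a tunable
    assumption, not a value from the paper)."""
    score = max(event_scores)
    return score, score > threshold

# Hypothetical predictor output for the "activity" attribute of the next event
dist = {"check_order": 0.7, "ship_goods": 0.25, "cancel": 0.05}
normal = attribute_anomaly_score(dist, "check_order")  # expected activity
odd = attribute_anomaly_score(dist, "cancel")          # unlikely activity
```

Here the observed-but-unlikely `cancel` event yields a high score, while the expected `check_order` event scores near zero, matching the intuition that low predicted probability signals an anomaly.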

Similar Papers
  • Research Article
  • Citations: 15
  • 10.1016/j.is.2024.102405
GAMA: A multi-graph-based anomaly detection framework for business processes via graph neural networks
  • May 19, 2024
  • Information Systems
  • Wei Guan + 3 more

Anomalies in business processes are inevitable for various reasons such as system failures and operator errors. Detecting anomalies is important for the management and optimization of business processes. However, prevailing anomaly detection approaches often fail to capture crucial structural information about the underlying process. To address this, we propose a multi-Graph based Anomaly detection fraMework for business processes via grAph neural networks, named GAMA. GAMA makes use of structural process information and attribute information in a more integrated way. In GAMA, multiple graphs are applied to model a trace in which each attribute is modeled as a separate graph. In particular, the graph constructed for the special attribute activity reflects the control flow. Then GAMA employs a multi-graph encoder and a multi-sequence decoder on multiple graphs to detect anomalies in terms of the reconstruction errors. Moreover, three teacher forcing styles are designed to enhance GAMA’s ability to reconstruct normal behaviors and thus improve detection performance. We conduct extensive experiments on both synthetic logs and real-life logs. The experiment results demonstrate that GAMA outperforms state-of-the-art methods for both trace-level and attribute-level anomaly detection.

  • Research Article
  • 10.35774/econa2023.04.253
The influence of corporate management on the optimization of business processes
  • Jan 1, 2023
  • Economic Analysis
  • Ihor Miroshnychenko + 1 more

Cite as: Miroshnychenko I., and Bradul, O. (2023). The influence of corporate management on the optimization of business processes. Economic analysis, 33 (4), 253-260. DOI: https://doi.org/10.35774/econa2023.04.253 The relevance of the study stems from the fact that the economic situation in Ukraine, owing to the military aggression of a neighboring country, has become critical: many domestic corporations are losing production ties, key suppliers, and sales markets, and the performance indicators of their business processes are declining. The main task of corporate management is therefore to find ways to optimize business processes under the existing conditions and to direct all of the corporation's potential toward maintaining and preserving the business. The purpose of the study is to determine the specifics of the impact of corporate governance on the optimization of an enterprise's business processes. The object of the research is corporate management and its influence on the optimization of a corporation's business processes. The theoretical analysis within this study was carried out using the methods of analysis, systematization, generalization, and comparison of the theoretical positions of various researchers on the essence and content of corporate management and its influence on the optimization of an enterprise's business processes. Based on a critical review of scientific works on the essence of corporate governance, the following scientific approaches to its interpretation were distinguished: classical, managerial, regulatory, controlling, strategic, effective, shareholder, and stakeholder. The authors propose a view of these approaches as a complex interacting system, systematized into three groups (interest, management, and effect).
Interest-group approaches pay greater attention to satisfying the interests of participants in corporate relations; management-group approaches emphasize the management functions and tasks of corporate management; and effect-group approaches focus on obtaining the desired results of such management in a strategic perspective. Corporate governance is defined as a system for managing corporate relations in an organization that functions to realize the organization's strategic goals by ensuring an effective decision-making mechanism based on the regulation and control of corporate rights and the monitoring of performance, so as to balance the interests of participants in corporate relations. The concept of "optimization of business processes" in the corporate-management environment is formulated around the key role of corporate management, which develops a results-oriented strategy.

  • Research Article
  • Citations: 12
  • 10.1587/transcom.e93.b.328
Evaluation of Anomaly Detection Method Based on Pattern Recognition
  • Jan 1, 2010
  • IEICE Transactions on Communications
  • Romain Fontugne + 2 more

The number of threats on the Internet is rapidly increasing, and anomaly detection has become increasingly important. High-speed backbone links are particularly affected, but their analysis is complicated by the sheer amount of data, the lack of payload data, asymmetric routing, and the use of sampling techniques. Most anomaly detection schemes focus on the statistical properties of network traffic and highlight anomalous traffic through its singularities. In this paper, we concentrate on unusual traffic distributions, which are easily identifiable in temporal-spatial space (e.g., time/address or port). We present an anomaly detection method that uses a pattern recognition technique to identify anomalies in pictures representing traffic. The main advantage of this method is its ability to detect attacks involving mice flows. We evaluate the parameter set and the effectiveness of this approach by analyzing six years of Internet traffic collected from a trans-Pacific link. We show several examples of detected anomalies and compare our results with those of two other methods. The comparison indicates that the anomalies detected only by the pattern-recognition-based method are mainly malicious traffic with a few packets.
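The "pictures representing traffic" idea above can be sketched as a simple 2-D histogram over temporal-spatial space. This is only an illustrative aggregation step (bin sizes and the flow-record format are assumptions, not the paper's parameters); the paper's actual contribution is the pattern recognition applied on top of such images.

```python
def traffic_picture(flows, time_bins=4, port_bins=4, t_max=400, port_max=65536):
    """Aggregate (timestamp, dst_port, n_packets) flow records into a small
    2D histogram: rows = time bins, columns = port ranges. Dense spots in
    this 'picture' are candidate regions for pattern recognition."""
    grid = [[0] * port_bins for _ in range(time_bins)]
    for t, port, pkts in flows:
        r = min(int(t * time_bins / t_max), time_bins - 1)
        c = min(int(port * port_bins / port_max), port_bins - 1)
        grid[r][c] += pkts
    return grid

# A burst of small ("mice") flows hitting low ports in a short interval,
# plus one unrelated flow later (hypothetical records)
flows = [(10, 80, 2), (12, 443, 1), (15, 22, 3), (380, 50000, 1)]
pic = traffic_picture(flows)
```

The three small flows concentrate in one cell of the grid, which is exactly the kind of localized structure a pattern-recognition stage can pick out even though each individual flow carries few packets.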

  • Research Article
  • Citations: 2
  • 10.17576/jsm-2017-4611-22
On-line Detection Method for Outliers of Dynamic Instability Measurement Data in Geological Exploration Control Process
  • Nov 30, 2017
  • Sains Malaysiana
  • Fang Liu + 3 more

Considering the characteristics of the vibration data collected during the unstable regulation process in a grinding and grading control system, and the shortcomings of traditional wavelet-based anomaly detection, an online anomaly detection method combining autoregressive modeling and wavelet analysis is proposed. By introducing an improved robust AR model, the method overcomes the poor time-frequency balance of traditional wavelet-based anomaly detection and ensures sound detection of normal process data. Given the parameter changes and dynamic characteristics of the grinding and grading process, the proposed method supports online detection and real-time parameter updating, keeping the control parameters of the time-varying process control system current. To avoid the need to manually set a detection threshold, as traditional anomaly detection methods require, an HMM is introduced to analyse the wavelet coefficients, and its parameters are updated online so that it accurately reflects the distribution of abnormal values in the process data. Experiments and applications show that the proposed anomaly detection method is well suited to data collected during the unstable regulation process.

  • Conference Article
  • Citations: 5
  • 10.1145/3511808.3557073
Towards an Awareness of Time Series Anomaly Detection Models' Adversarial Vulnerability
  • Oct 17, 2022
  • Shahroz Tariq + 2 more

Time series anomaly detection is extensively studied in statistics, economics, and computer science. Over the years, numerous methods have been proposed for time series anomaly detection using deep learning-based methods. Many of these methods demonstrate state-of-the-art performance on benchmark datasets, giving the false impression that these systems are robust and deployable in many practical and industrial real-world scenarios. In this paper, we demonstrate that the performance of state-of-the-art anomaly detection methods is degraded substantially by adding only small adversarial perturbations to the sensor data. We use different scoring metrics such as prediction errors, anomaly scores, and classification scores over several public and private datasets ranging from aerospace applications and server machines to cyber-physical systems in power plants. Under well-known adversarial attacks, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), we demonstrate that state-of-the-art deep neural network (DNN) and graph neural network (GNN) methods, which claim to be robust against anomalies and have possibly been integrated into real-life systems, have their performance drop to as low as 0%. To the best of our knowledge, we demonstrate, for the first time, the vulnerabilities of anomaly detection systems against adversarial attacks. The overarching goal of this research is to raise awareness of the adversarial vulnerabilities of time series anomaly detectors.

  • Research Article
  • Citations: 2
  • 10.18517/ijaseit.14.6.20451
Comprehensive Analysis and Improved Techniques for Anomaly Detection in Time Series Data with Autoencoder Models
  • Dec 19, 2024
  • International Journal on Advanced Science, Engineering and Information Technology
  • Sarvarbek Erniyazov + 3 more

Anomaly detection is critical in various sectors, offering significant advantages by precisely identifying and mitigating system failures and errors, thus preventing severe losses. This study provides a comprehensive comparative anomaly detection analysis through two sophisticated deep learning models: Autoencoder and Long Short-Term Memory (LSTM) Autoencoder, explicitly focusing on temperature and sound time series data. The paper starts with a detailed theoretical foundation, elaborating on both models' mechanics and mathematical formulations. We then advance to the empirical phase, where these models are rigorously trained and tested against a robust dataset. The effectiveness of each model is meticulously assessed through a suite of metrics that gauge their accuracy, sensitivity, and robustness in anomaly detection scenarios. Additionally, we explore the deployment of these models in a real-time environment, where they actively engage in anomaly detection on incoming data streams. The anomalies detected are dynamically displayed on a user-friendly graphical interface, making the results readily accessible and interpretable for users at all levels of technical expertise. Quantitative evaluations of the models are conducted using key performance metrics such as accuracy, precision, recall, and F1-score. Our analysis reveals that the LSTM Autoencoder model excels with an impressive accuracy rate of 99%, while other metrics also affirm its superior performance, marking it as exceptionally effective and reliable. This study highlights the LSTM Autoencoder's advanced anomaly detection capabilities and establishes its superiority over the traditional Autoencoder model in processing complex time series data. The insights gained here are crucial for industries focused on predictive maintenance and quality control, where early anomaly detection is key to maintaining operational efficiency and safety.
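The detection rule both autoencoder variants above share—flag a window whose reconstruction error exceeds a threshold fitted on normal data—can be sketched without any deep learning framework. In this hypothetical sketch, the error values stand in for |x − decoder(encoder(x))|, and the mean-plus-k-sigma rule and k=3 are common heuristics assumed here, not taken from the paper.

```python
import statistics

def fit_threshold(train_errors, k=3.0):
    """Threshold = mean + k * stdev of reconstruction errors measured on
    normal training data (k is a tunable assumption)."""
    mu = statistics.mean(train_errors)
    sigma = statistics.stdev(train_errors)
    return mu + k * sigma

def detect(errors, threshold):
    """Return indices of windows whose reconstruction error is anomalous."""
    return [i for i, e in enumerate(errors) if e > threshold]

train = [0.10, 0.12, 0.11, 0.09, 0.13]   # errors on normal data (illustrative)
test = [0.11, 0.95, 0.10]                # 0.95: a badly reconstructed window
thr = fit_threshold(train)
anoms = detect(test, thr)
```

The autoencoder only learns to reconstruct normal patterns, so windows it reconstructs poorly (index 1 here) are flagged; the same logic applies whether the encoder is a plain feed-forward network or an LSTM.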

  • Research Article
  • Citations: 9
  • 10.1016/j.neunet.2025.107169
Graph anomaly detection based on hybrid node representation learning.
  • May 1, 2025
  • Neural networks : the official journal of the International Neural Network Society
  • Xiang Wang + 3 more


  • Research Article
  • Citations: 113
  • 10.1609/aaai.v36i6.20629
LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks
  • Jun 28, 2022
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Adam Goodge + 3 more

Many well-established anomaly detection methods use the distance of a sample to those in its local neighbourhood: so-called 'local outlier methods', such as LOF and DBSCAN. They are popular for their simple principles and strong performance on unstructured, feature-based data that is commonplace in many practical applications. However, they cannot learn to adapt for a particular set of data due to their lack of trainable parameters. In this paper, we begin by unifying local outlier methods by showing that they are particular cases of the more general message passing framework used in graph neural networks. This allows us to introduce learnability into local outlier methods, in the form of a neural network, for greater flexibility and expressivity: specifically, we propose LUNAR, a novel, graph neural network-based anomaly detection method. LUNAR learns to use information from the nearest neighbours of each node in a trainable way to find anomalies. We show that our method performs significantly better than existing local outlier methods, as well as state-of-the-art deep baselines. We also show that the performance of our method is much more robust to different settings of the local neighbourhood size.
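A minimal version of the fixed kNN-distance scoring that LUNAR generalizes (plain distance-to-neighbours aggregation, with no learned message passing) might look like the sketch below; the data and function names are illustrative assumptions.

```python
import math

def knn_score(point, data, k=2):
    """Local outlier score: mean Euclidean distance from `point` to its
    k nearest neighbours in `data`. LUNAR's insight is that this fixed
    aggregation is one instance of GNN message passing over the kNN graph,
    so it can be replaced by a trainable network."""
    dists = sorted(math.dist(point, x) for x in data if x != point)
    return sum(dists[:k]) / k

# A tight cluster of normal points (illustrative 2-D data)
cluster = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
inlier = knn_score((0.05, 0.05), cluster)   # sits inside the cluster
outlier = knn_score((3.0, 3.0), cluster)    # far from all neighbours
```

The far-away point receives a much larger score than the point inside the cluster; what LUNAR adds is the ability to learn, per dataset, how neighbour distances should be combined instead of always averaging them.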

  • Book Chapter
  • Citations: 4
  • 10.1007/978-3-540-48584-1_19
Evolutionary Optimization of Business Process Designs
  • Jan 1, 2007
  • Ashutosh Tiwari + 2 more

Summary. Business process redesign and improvement have become increasingly attractive in the wider area of business process intelligence. Although there are many attempts to establish a qualitative business process redesign framework, there is little work on quantitative business process analysis and optimization. Furthermore, most of the attempts to analyze and optimize a business process are manual without involving a formal automated methodology. Business process optimization can be classified as a scheduling problem, expressed as the selection of alternative activities in the appropriate sequence for the available resources to be transformed and thus satisfy the business process objectives. This chapter provides an overview of the current research about business process analysis and optimization and introduces an evolutionary approach. It demonstrates how a business process design problem can be modeled as a multi-objective optimization problem and solved using existing techniques. An illustrative case study is presented to demonstrate the results obtained through three multi-objective optimization algorithms. It is shown that multi-objective optimization of business processes is a highly constrained problem with fragmented search space. However, the results demonstrate a successful attempt and highlight the directions for future research in the area.

  • Research Article
  • Citations: 12
  • 10.1007/s00500-017-2679-3
Network anomaly detection based on probabilistic analysis
  • Jun 15, 2017
  • Soft Computing
  • Jinsoo Park + 5 more

In this paper, we provide a detection technique for a common type of network intrusion (the traffic flood attack) using an anomaly detection method based on probabilistic model analysis. Computers under attack show various symptoms, such as degraded TCP throughput, increased CPU usage, increased RTT (round-trip time), and frequent disconnections from web sites. These symptoms can serve as components of a k-dimensional feature space following a multivariate normal distribution, in which an anomaly detection method can be applied to detect the attack. These features are in general correlated with one another: most of the symptoms are caused by the attack, so they are highly correlated. We therefore choose only a few of these features for anomaly detection in the multivariate normal distribution. We study this technology for IoT networks intended to provide u-health services in the future, where stable and consistent network connectivity is extremely important because its loss can ultimately cost human lives.
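For two correlated features, the probabilistic rule described above (fit a multivariate normal to normal operation, flag points of low density) can be sketched with an explicit 2-D squared Mahalanobis distance. The feature names and data below are illustrative assumptions, not the paper's measurements.

```python
import statistics

def fit_gaussian_2d(xs, ys):
    """Fit the mean vector and 2x2 covariance of two (possibly correlated)
    features, e.g. CPU usage and RTT measured during normal operation."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    cxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
    return (mx, my), (vx, cxy, vy)

def mahalanobis2(point, mean, cov):
    """Squared Mahalanobis distance: large values = low density under the
    fitted normal = anomaly. Uses the closed-form 2x2 covariance inverse."""
    (mx, my), (vx, cxy, vy) = mean, cov
    dx, dy = point[0] - mx, point[1] - my
    det = vx * vy - cxy * cxy
    return (vy * dx * dx - 2 * cxy * dx * dy + vx * dy * dy) / det

cpu = [20, 22, 21, 23, 19, 25]          # % CPU during normal traffic
rtt = [30, 33, 31, 34, 29, 36]          # ms; correlated with CPU
mean, cov = fit_gaussian_2d(cpu, rtt)
normal_d = mahalanobis2((22, 32), mean, cov)    # typical operating point
attack_d = mahalanobis2((90, 400), mean, cov)   # flood-like symptoms
```

Because the covariance is modeled, a point that is extreme in both features jointly scores far higher than one consistent with their normal correlation, which is why a few well-chosen correlated symptoms suffice for detection.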

  • Conference Article
  • Citations: 16
  • 10.1109/iecon.2003.1280336
The methodology for business process optimized design
  • Nov 2, 2003
  • Yonghua Zhou + 1 more

Effective business process design is the precondition for successful business process operation in business process reengineering (BPR) or improvement (BPI). This paper develops a systematic optimized design methodology of business process that also outlines the knowledge infrastructure about BPR from strategic, tactical and operational levels with supportive methods corresponding to design phases, such as BPR strategy formation based on quality function deployment (QFD), data envelopment analysis (DEA)-based benchmarking, business process structurized description and structural optimization, project-oriented business process performance optimization, and business process evaluation and decision. BPR is decomposed into business reengineering (BR) and process reengineering (PR) corresponding to business strategy formation and business process planning and control in integrated business process management, respectively. Business process optimization is a collaborative process centered on the operational process optimization jointly considering management and supportive processes optimization.

  • Conference Article
  • Citations: 12
  • 10.1145/1298306.1298320
Challenging the supremacy of traffic matrices in anomaly detection
  • Oct 24, 2007
  • Augustin Soule + 3 more

Multiple network-wide anomaly detection techniques proposed in the literature define an anomaly as a statistical outlier in aggregated network traffic. The most popular way to aggregate the traffic is as a traffic matrix, where the traffic is divided according to its ingress and egress points in the network. However, the reasons for choosing traffic matrices instead of any other formalism have not been studied yet. In this paper we compare three network-driven traffic aggregation formalisms: ingress routers, input links, and origin-destination pairs (i.e. traffic matrices). Each formalism is computed on data collected from two research backbones. Then, a network-wide anomaly detection method is applied to each formalism. All anomalies are manually labeled as true or false positives. Our results show that the traffic aggregation level has a significant impact on the number of anomalies detected and on the false positive rate. We show that aggregating by OD pairs is indeed the most appropriate choice for the data sets and the detection method we consider. We correlate our observations with time series statistics in order to explain how aggregation impacts anomaly detection.

  • Research Article
  • Citations: 81
  • 10.1016/j.neunet.2018.08.010
The Vapnik–Chervonenkis dimension of graph and recursive neural networks
  • Sep 1, 2018
  • Neural Networks
  • Franco Scarselli + 2 more


  • Research Article
  • 10.1088/1742-6596/2791/1/012055
Anomaly detection of four-engine aircraft based on Parameter differences
  • Jul 1, 2024
  • Journal of Physics: Conference Series
  • Jiahuan Liu + 2 more

In this paper, a real-time anomaly detection method combining a GDN and a Gaussian model is proposed for real-time engine condition monitoring of four-engine aircraft. First, the gas path parameters of the engine are selected, and the differences between the gas path parameters are calculated to offset the influence of the external environment on the engine. Next, the sliding time window method is used for real-time prediction, and the model prediction residuals are used for unsupervised anomaly detection. Finally, the Gaussian model is used to prune the anomalies detected by the GDN model, reducing misjudgments and improving detection accuracy. The experimental results show that the proposed method detects 97.96% of the outliers in the test set with an accuracy of 89.47%, outperforming the LSTM, AE, GDN, and Gaussian models used alone.

  • Research Article
  • Citations: 1
  • 10.14778/3705829.3705846
Can Graph Reordering Speed Up Graph Neural Network Training? An Experimental Study
  • Oct 1, 2024
  • Proceedings of the VLDB Endowment
  • Nikolai Merkel + 3 more

Graph neural networks (GNNs) are a type of neural network capable of learning on graph-structured data. However, training GNNs on large-scale graphs is challenging due to iterative aggregations of high-dimensional features from neighboring vertices within sparse graph structures, combined with neural network operations. The sparsity of graphs frequently results in suboptimal memory access patterns and longer training time. Graph reordering is an optimization strategy aiming to improve the graph data layout. It has been shown to speed up graph analytics workloads, but its effect on the performance of GNN training has not been investigated yet. Generalizing reordering results to GNN performance is nontrivial, as multiple aspects must be considered: GNN hyper-parameters such as the number of layers, the number of hidden dimensions, and the feature size used in the GNN model; neural network operations; large intermediate vertex states; and GPU acceleration. In our work, we close this gap by performing an empirical evaluation of 12 reordering strategies in two state-of-the-art GNN systems, PyTorch Geometric and Deep Graph Library. Our results show that graph reordering is effective in reducing training time for both CPU- and GPU-based training. Further, we find that GNN hyper-parameters influence the effectiveness of reordering, that reordering metrics play an important role in selecting a reordering strategy, that lightweight reordering performs better for GPU-based than for CPU-based training, and that invested reordering time can in many cases be amortized.
