Abstract

Event logs recorded during the execution of business processes provide a valuable starting point for operational monitoring, analysis, and improvement. Specifically, measures that quantify any deviation between the recorded operations and organizational goals enable the identification of operational issues. However, the data needed to compute such process-specific measures, commonly referred to as process performance indicators (PPIs), may contain personal data of individuals, which implies an inevitable risk of privacy intrusion that must be addressed.

In this article, we target the privacy-aware computation of process performance indicators. To this end, we adopt tree-based definitions of PPIs according to the well-established PPINOT meta-model. For such a PPI, we design data release mechanisms for the functions in a PPI tree. Using a probabilistic formulation of the expected result of a privatized PPI, we further show how to determine the combination of release mechanisms that inflicts the least loss in utility. Moreover, given a set of PPIs, we provide an algorithmic framework to manage an inherent trade-off: privatization may strive for maximal utility of each single PPI or for maximal reuse of privatized functions among all PPIs, so as to use a privacy budget most effectively. Results from experiments with synthetic as well as real-world data indicate the general feasibility of privacy-aware PPIs and shed light on the trade-offs once a set of them is considered.
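The abstract does not fix a concrete release mechanism, but the mention of a privacy budget suggests a differential-privacy setting. As a hypothetical illustration only (not the authors' method), the following sketch privatizes a simple PPI, the mean case duration, with the standard Laplace mechanism; the function names, clipping bounds, and data are invented for the example.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample a Laplace(0, scale) variate via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean_duration(durations, lower, upper, epsilon, rng):
    """Epsilon-DP estimate of a mean-duration PPI.

    Durations are clipped to [lower, upper], so the sensitivity of the
    mean over n cases is (upper - lower) / n; Laplace noise calibrated
    to sensitivity / epsilon then yields epsilon-differential privacy.
    """
    n = len(durations)
    clipped = [min(max(d, lower), upper) for d in durations]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

# Hypothetical example: case durations in hours for 500 process instances.
rng = random.Random(7)
durations = [rng.uniform(1.0, 48.0) for _ in range(500)]
estimate = private_mean_duration(durations, 0.0, 48.0, epsilon=1.0, rng=rng)
```

With 500 cases the sensitivity is small (48/500), so for a moderate budget such as epsilon = 1 the noisy estimate stays close to the true mean; for a PPI tree, a mechanism of this kind would be chosen per function, which is exactly where the utility/budget trade-off discussed in the abstract arises.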

