Modern industrial units collect large amounts of process data, on which advanced process monitoring algorithms rely to continuously assess the status of operations. As an integral part of the development of such algorithms, a reference dataset representative of normal operating conditions is required to evaluate the stability of the process and, once stability is confirmed, to calibrate a monitoring procedure, i.e., to estimate the reference model and set the control limits for the monitoring statistics. The basic assumption is that all relevant “common causes” of variation (in the terminology adopted by the founding father of process monitoring, Walter A. Shewhart) are well represented in this reference dataset. Otherwise, false alarms will inevitably occur during the implementation of the monitoring scheme. However, as we argue and demonstrate in this article, this assumption is often not met in modern industrial systems. We therefore introduce a new approach based on rigorous mechanistic modeling of the dominant modes of common cause variation and on stochastic computational simulations that enrich the historical dataset with augmented data, providing comprehensive coverage of the actual operational space. We show how to compute the monitoring statistics and set their control limits, as well as how to conduct fault diagnosis when an abnormal event is declared. The proposed method, called AGV (Artificial Generation of common cause Variability), is applied to a Surface Mount Technology (SMT) production line of Bosch Car Multimedia, where more than 17,000 product variables are simultaneously monitored.
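
The abstract only sketches the approach at a high level, so the following minimal Python sketch illustrates the general idea under stated assumptions: it is not the authors' implementation, and the use of a PCA model with a Hotelling's T² statistic, the three simulated common-cause modes, and all numerical parameters are assumptions introduced here purely for illustration.

```python
# Illustrative sketch of the AGV idea (not the authors' implementation):
# enrich a normal-operation reference set with simulated common-cause
# variation, then calibrate a monitoring model on the augmented data.
# All names, model choices, and parameters below are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# --- Historical reference data: n samples of p correlated variables ---
n, p = 200, 10
latent = rng.normal(size=(n, 3))                  # hidden common-cause modes
loadings = rng.normal(size=(3, p))
X_hist = latent @ loadings + 0.1 * rng.normal(size=(n, p))

# --- AGV-like step (stochastic simulation stand-in): sample the common-cause
# modes over a wider range than the history happens to cover ---
n_aug = 2000
latent_aug = rng.uniform(-3, 3, size=(n_aug, 3))  # broader operational space
X_aug = latent_aug @ loadings + 0.1 * rng.normal(size=(n_aug, p))
X_ref = np.vstack([X_hist, X_aug])                # augmented reference set

# --- Calibrate a PCA monitoring model on the augmented reference data ---
mu, sigma = X_ref.mean(axis=0), X_ref.std(axis=0)
Z = (X_ref - mu) / sigma
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3                                             # retained components
P, lam = Vt[:k].T, (s[:k] ** 2) / (len(Z) - 1)

def t2(x):
    """Hotelling's T^2 statistic in the retained PCA subspace."""
    t = ((x - mu) / sigma) @ P
    return np.sum(t ** 2 / lam, axis=-1)

# Control limit set as an empirical percentile over the augmented reference set
limit = np.percentile(t2(X_ref), 99)

# --- Monitor a new observation against the control limit ---
x_new = rng.normal(size=p)
print("alarm" if t2(x_new) > limit else "normal", float(t2(x_new)), float(limit))
```

In this toy setting, the augmented data deliberately span a wider region of the latent common-cause space than the historical sample, so the control limit reflects the full operational space rather than the portion of it the history happens to contain; this mirrors the stated motivation for enriching the reference dataset.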