Current approaches to the economic design of control charts assume specific quality distributions, restrict parameter choices, and over-rely on historical samples, preventing companies from accurately determining the most economical parameters. Leveraging industrial big data, we propose a data-driven, mixed-integer linear programming model for the economic design of adaptive control charts. Control limits are designed dynamically as a function of features to minimise quality costs. Considering the trade-off between false alarms and penalty costs, we develop three models: a basic model incorporating big data, a model with cost-penalised features, and a model that uses regularisation to manage overfitting. We evaluate the models in simulation using new performance measures. Our findings demonstrate the economic value of adaptive control-limit strategies that incorporate feature data relative to benchmarks. We extend the model to a framework with endogenous sample size and sampling interval, further demonstrating the advantages of our approach. A case study using real-world data from a casting company shows that our approach reduces costs by 24.6% relative to the company's existing quality control protocols. By operationalising big data, our approach enables manufacturers to make strategic decisions about quality control and thereby reduce quality costs.
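To make the core idea concrete, the sketch below shows one way feature-dependent control limits could be selected by a small mixed-integer linear programme that trades off false-alarm costs against penalty costs. This is an illustrative toy, not the paper's actual formulation: the linear limit form, the synthetic data, the cost values, the big-M constant, and the use of PuLP with the CBC solver are all assumptions made here for demonstration.

```python
# Minimal sketch (assumed, not the authors' formulation): choose an
# adaptive upper control limit L(x) = b0 + b1*x over a single feature x
# to minimise false-alarm cost plus penalty (missed-signal) cost,
# posed as a mixed-integer linear programme and solved with PuLP/CBC.
import random
import pulp

random.seed(0)

# --- hypothetical historical data --------------------------------------
# x: a process feature (e.g. ambient temperature), y: quality reading,
# shifted: whether the process was actually out of control for that sample.
n = 60
data = []
for _ in range(n):
    x = random.uniform(0.0, 1.0)
    shifted = random.random() < 0.3
    y = 2.0 + 1.5 * x + (2.5 if shifted else 0.0) + random.gauss(0.0, 0.5)
    data.append((x, y, shifted))

c_false, c_penalty = 1.0, 4.0   # assumed relative costs
M = 100.0                        # big-M constant linking limits to alarms

prob = pulp.LpProblem("adaptive_limit_design", pulp.LpMinimize)
b0 = pulp.LpVariable("b0", lowBound=-10, upBound=10)
b1 = pulp.LpVariable("b1", lowBound=-10, upBound=10)
# alarm[i] = 1 if sample i falls above the adaptive limit b0 + b1*x_i
alarm = [pulp.LpVariable(f"alarm_{i}", cat="Binary") for i in range(n)]

for i, (x, y, _) in enumerate(data):
    # big-M linking: alarm_i must be 1 when y exceeds the limit,
    # and 0 when y is below it.
    prob += y - (b0 + b1 * x) <= M * alarm[i]
    prob += (b0 + b1 * x) - y <= M * (1 - alarm[i])

# total cost: false alarms on in-control samples plus penalties for
# out-of-control samples that the limit fails to flag
prob += pulp.lpSum(
    c_false * alarm[i] if not shifted else c_penalty * (1 - alarm[i])
    for i, (_, _, shifted) in enumerate(data)
)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("adaptive limit: y >", pulp.value(b0), "+", pulp.value(b1), "* x")
print("total quality cost:", pulp.value(prob.objective))
```

In this toy setting, raising the limit reduces false alarms but increases missed shifts, and the solver picks the feature-dependent limit that balances the two cost terms; the paper's models additionally handle cost-penalised features, regularisation against overfitting, and endogenous sample size and sampling interval.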