Abstract

Process measurements are contaminated by random and/or gross measurement errors, which degrades the performance of data-based strategies for improving process operation, such as online optimization and advanced control. Many approaches have been proposed to reduce the influence of measurement errors, among which expectation maximization (EM) is a recently proposed, parameter-free one. In this study, we examine the EM approach in detail and argue that the original EM approach cannot rectify measurements contaminated by persistent biases, which is a pitfall of the original formulation. We therefore propose a modified EM approach that circumvents this pitfall by fixing the standard deviation of the random error mode. The modified EM approach was evaluated on several benchmark cases of process data rectification from the literature. The results show that the proposed approach outperforms the original EM in both solution efficiency and data rectification performance.
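
This page does not reproduce the paper's error model or update equations, so the following is only a minimal sketch of the fixed-standard-deviation idea, assuming a two-component zero-mean Gaussian mixture over measurement residuals: the random-error mode's standard deviation is pinned (for example, to a value from instrument specifications) and only the gross-error mode and the mixing weight are re-estimated. The function names, initialization, and iteration count are illustrative, not from the paper.

```python
import numpy as np

def normal_pdf(x, sigma):
    """Zero-mean Gaussian density."""
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def em_fixed_sigma(residuals, sigma_r, n_iter=200):
    """EM for a two-mode error model on measurement residuals:
    a random-error mode N(0, sigma_r^2) whose sigma_r is held FIXED
    (e.g., taken from instrument specifications), and a gross-error
    mode N(0, sigma_g^2) whose variance is estimated from the data.
    Returns the gross-error mixing weight, its sigma, and the
    per-sample responsibilities of the gross-error mode."""
    pi_g, sigma_g = 0.1, 10.0 * sigma_r            # crude initial guesses
    for _ in range(n_iter):
        # E-step: posterior probability that each residual is gross
        p_g = pi_g * normal_pdf(residuals, sigma_g)
        p_r = (1.0 - pi_g) * normal_pdf(residuals, sigma_r)
        gamma = p_g / (p_g + p_r)
        # M-step: update only the gross-error mode; keeping sigma_r
        # fixed prevents a persistent bias from being absorbed into
        # the random-error variance
        pi_g = gamma.mean()
        sigma_g = np.sqrt((gamma * residuals ** 2).sum() / gamma.sum())
    return pi_g, sigma_g, gamma
```

Because sigma_r never adapts to the data, residuals produced by a persistent bias cannot silently widen the random-error mode; they are instead pushed toward the gross-error mode through the responsibilities.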

Highlights

  • The deviation criterion (DC) and the probability and deviation criterion (PDC) achieved the same overall performance (OP) and average number of type I errors (AVTI) in all cases except the bilinear metallurgical grinding (MG) case, where PDC detected slightly fewer biases than DC; whether this is a special case remains to be investigated in future work, since this study focuses on modifying the original expectation maximization (EM) approach to rectify measurements contaminated by persistent biases

  • We analyze the influence of a persistent bias on the estimated standard deviation of the random error mode in the EM approach and argue that, when a persistent bias is present, the 3σ rule cannot be used to detect it (see the simulation sketch after this list)
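
To make the last highlight concrete, here is a small self-contained simulation (the bias magnitude and sample sizes are illustrative, not from the paper) showing how a persistent bias inflates a naively estimated standard deviation, so that the 3σ rule misses most of the biased samples:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 1.0
clean = rng.normal(0.0, sigma_true, 900)    # random-error residuals
biased = rng.normal(4.0, sigma_true, 100)   # persistent bias of 4*sigma
residuals = np.concatenate([clean, biased])

# Estimating sigma from the contaminated data absorbs the bias:
sigma_hat = residuals.std()                 # roughly 1.6, not 1.0
detected = np.abs(residuals[900:]) > 3.0 * sigma_hat
print(f"estimated sigma: {sigma_hat:.2f}")
print(f"biased samples flagged by 3*sigma_hat: {detected.mean():.0%}")
# With the true sigma, |r| > 3 would flag ~84% of the biased samples;
# with the inflated sigma_hat, typically only about a quarter are flagged.
```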

Introduction

With the advancement of smart manufacturing, process measurements play an increasingly important role in modern chemical manufacturing plants [1,2,3]. To recover the true values of process variables from contaminated measurements, many approaches to data rectification, i.e., reducing random and gross errors in the measurements simultaneously, have been proposed since the 1960s [2]. The first class of approaches identifies gross errors with a statistical test by assuming that random errors follow a normal distribution [10]. A data reconciliation procedure, i.e., solving a constrained least squares problem that minimizes the difference between the measured values and reconciled values satisfying the process models, is then carried out to estimate the true values of the measurements not contaminated by gross errors, while the true values of the measurements contaminated by gross errors are treated as unknown parameters to be estimated. Although the algorithmic parameters, such as the critical values of the statistical test, can be chosen with clear statistical meanings, only one gross error can be identified at a time because of the smearing effect of a large gross error; hence, statistical-test approaches must identify gross errors one by one, and elaborate frameworks must be designed to ensure the performance of data rectification [11]
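
For readers unfamiliar with the reconciliation step, a minimal sketch of the linear case follows; the balance matrix and numbers form a hypothetical toy example, not one from the paper. With measurements y, error covariance Σ, and linear process model Ax = 0, the constrained least squares problem min (y − x)ᵀΣ⁻¹(y − x) subject to Ax = 0 has the closed-form Lagrangian solution x̂ = y − ΣAᵀ(AΣAᵀ)⁻¹Ay:

```python
import numpy as np

def reconcile(y, A, Sigma):
    """Linear data reconciliation: minimize (y - x)^T Sigma^{-1} (y - x)
    subject to A x = 0, where y holds the measurements, Sigma the
    measurement-error covariance, and A the linear balance constraints.
    Closed form: x_hat = y - Sigma A^T (A Sigma A^T)^{-1} A y."""
    S = A @ Sigma @ A.T
    return y - Sigma @ A.T @ np.linalg.solve(S, A @ y)

# Toy flow network: one node with f1 + f2 = f3, i.e. A = [1, 1, -1].
A = np.array([[1.0, 1.0, -1.0]])
Sigma = np.diag([0.1, 0.1, 0.2])
y = np.array([10.2, 4.9, 15.6])       # measured flows, slightly inconsistent
x_hat = reconcile(y, A, Sigma)
print(x_hat, A @ x_hat)               # reconciled values satisfy the balance
```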
