Abstract

This article describes the analysis of industrial process data to detect outliers and systematic errors. Data reconciliation is an important step in adjusting mathematical models to plant data, and the quality of the data directly affects the quality of the model fit for modeling, simulation, and optimization purposes. Detecting these errors in a multivariable system is not an easy task. If the origin of the abnormal values is known, those values can be discarded immediately. If, on the other hand, an error or an extreme observation is not clearly justified, the decision whether or not to discard the values must be based on statistical analysis. In this work, in addition to process knowledge, the methodology employed combines statistical analysis, first-principles equations, neural network models, and a composite of these. The neural-network-based approach was used to represent the process in order to classify similar inputs and outputs, i.e., to identify clusters. Gross errors were eliminated by the similarity principle or by hypothesis testing for means. The system studied is the Isoprene Production Unit of BRASKEM, the largest Brazilian petrochemical plant. The process was analyzed using a one-year database in which the monitored variables were sampled every 15 minutes.
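The paper itself does not list its equations here, but the gross-error elimination by "hypothesis testing for means" mentioned above can be illustrated with a minimal sketch: compare the mean of a suspect window of measurements against the mean of its reference cluster with a two-sample test. The function name, the data values, and the large-sample 1.96 threshold (roughly a 5% two-sided level under Gaussian noise) are illustrative assumptions, not the authors' implementation.

```python
import math
import statistics

def mean_shift_test(reference, window, crit=1.96):
    """Two-sample test for a shift in the mean (large-sample normal
    approximation; this is an illustrative sketch, not the paper's code).

    reference: historical measurements for one cluster of the variable.
    window: recent measurements suspected of containing a gross error.
    Returns (statistic, flagged); flagged is True when the window mean
    differs significantly from the reference mean.
    """
    se = math.sqrt(statistics.variance(reference) / len(reference)
                   + statistics.variance(window) / len(window))
    stat = (statistics.mean(window) - statistics.mean(reference)) / se
    return stat, abs(stat) > crit

# Hypothetical flow readings: a stable reference cluster vs. a biased window
reference = [100.2, 99.8, 100.1, 100.0, 99.9, 100.3, 99.7, 100.1]
biased = [103.1, 102.8, 103.3, 102.9, 103.0]
stat, flagged = mean_shift_test(reference, biased)
print(flagged)  # the biased window is flagged as a gross error
```

A window consistent with the reference (e.g., values scattered around the same mean) would not be flagged, so routine measurement noise survives while a systematic bias is caught.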
