Abstract

The economics of a mineral processing circuit depend on the numerous sensors that feed its optimization and control systems. The sheer number of sensors makes it difficult for mines to keep them all calibrated. When a sensor drifts out of calibration, the resulting measurement errors can severely impact plant economics. Normally, these errors are not detected until they have grown into large (“gross”) errors. Undetected errors are not remedied until the next calibration, which across all industries is, on average, a year away (Beamex, 2019) [1]. Classical statistical methods that rely on linear relationships between input and response variables are typically used to find gross errors, but they are not effective for the highly non-linear and non-stationary operations of the mining and mineral processing industries. Sensor calibration is time-consuming and generally requires physical intervention, causing equipment downtime and production losses. In situ detection methods are therefore warranted that do not require physically removing sensors for comparison against standard measurements or well-calibrated references. Data-mining-based techniques and algorithms have a high success rate in tackling such problems. This paper presents results from a groundbreaking multi-year big data research project whose goal was to develop original data-mining-based techniques for a multi-sensor environment that identify when sensors begin to stray, rather than waiting for errors to grow. The carbon stripping (gold) circuit at the Pogo Mine in Alaska was selected for this project. Maintaining strip vessel temperatures at optimum levels is crucial for maximizing gold recovery at Pogo, so the focus was on the temperature sensors of the two strip vessels. An automated algorithm was developed that detects errors in the strip vessel temperature sensors by exploiting data from multiple sensors in the strip circuit. The algorithm detected, with a high success rate, bias errors as small as ±2% that were artificially induced into the strip vessel temperature data streams. Detection times of about one month were far shorter than the industry-average calibration interval of one year. Thus, the algorithm helps detect errors early, without stopping the circuit, and its alarms can serve as triggers for calibration. The experimental methodology, results, limitations of the algorithms, and directions for future research are also presented in this paper.
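
The abstract does not detail the detection algorithm itself. The following Python sketch only illustrates the general pattern it describes: injecting a small artificial bias into one temperature data stream and flagging the drift from that stream's relationship with a correlated companion sensor, here via a simple regression residual monitored with a two-sided CUSUM. The simulated data, sensor roles, and thresholds are assumptions made for illustration, not the authors' method.

```python
# Minimal illustrative sketch, not the paper's algorithm. All names, constants,
# and the simulated data are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

# Simulated hourly temperatures (deg C) for two correlated strip-circuit sensors.
n = 24 * 90                                      # ~3 months of hourly samples
base = 140 + 3 * np.sin(np.arange(n) / 200)
reference = base + rng.normal(0, 0.5, n)         # assumed well-calibrated companion
monitored = base + rng.normal(0, 0.5, n)         # sensor under test

# Induce a +2% bias partway through, mimicking the artificially induced errors.
fault_start = n // 2
monitored[fault_start:] *= 1.02

# Fit a linear relation on an early window assumed healthy, then track residuals.
train = slice(0, 24 * 14)                        # first two weeks
slope, intercept = np.polyfit(reference[train], monitored[train], 1)
residuals = monitored - (slope * reference + intercept)
mu0 = residuals[train].mean()
sigma = residuals[train].std()

# Two-sided CUSUM on the residuals (allowance k and threshold h are arbitrary).
k, h = 0.5 * sigma, 8.0 * sigma
s_hi = s_lo = 0.0
alarm_at = None
for t, r in enumerate(residuals):
    s_hi = max(0.0, s_hi + (r - mu0) - k)
    s_lo = max(0.0, s_lo - (r - mu0) - k)
    if s_hi > h or s_lo > h:
        alarm_at = t
        break

if alarm_at is None:
    print("No drift alarm raised")
else:
    print(f"Drift alarm {alarm_at - fault_start} hours after the induced +2% bias")
```

In this toy setting the bias is large relative to the simulated noise, so the alarm fires within hours; the month-scale detection times reported in the paper reflect real plant data, where the signal-to-noise ratio is far less favorable.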
