Abstract

Estimation and diagnostics of system states and parameters are ubiquitous in industrial applications. Estimation is often performed using input and output data, and the quality of the input excitation has a critical impact on the accuracy of the results. Optimal input excitation design has therefore been receiving increasing research attention. Conventionally, input design is formulated as an optimization problem that seeks a sequence of excitation maximizing a criterion associated with estimation accuracy, e.g., the information content of the data. However, this practice suffers from several major drawbacks, including susceptibility to uncertainty (especially uncertainty in the target parameters) and limited tractability of the solution. In this research, a reinforcement learning (RL) framework is proposed as a new approach to input design. We cast the input generation procedure as a Markov Decision Process and leverage reinforcement learning to learn an optimal policy for generating the input excitation. The new approach improves the robustness of the generated input sequence through the feedback mechanism of the policy, and its tractability through the learning mechanism of RL. The methodology is applied to optimal excitation design for estimating critical lithium-ion battery electrochemical parameters in both simulation and experiments. Results show that the new RL-based framework significantly outperforms the conventional direct optimization approach (achieving an order of magnitude higher information level) in the presence of uncertainty in the target parameter, and attains substantially smaller estimation error than other profiles in experiments. The obtained RL policy could be used for battery health diagnostics and for testing second-life batteries in repurposing applications.
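The core idea, choosing excitation inputs that maximize the information the data carries about an uncertain parameter, can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the paper's method: a hypothetical nonlinear measurement model, a discrete grid of input amplitudes, a uniform prior standing in for parameter uncertainty, and an epsilon-greedy bandit serving as a minimal stand-in for the full RL policy. The sketch shows why a learned policy helps: the most informative input depends on the unknown parameter itself, so the policy must perform well on average over the parameter's uncertainty.

```python
import numpy as np

# Toy model (hypothetical, not from the paper): one measurement
#   y = exp(-theta * u) + noise,  noise ~ N(0, sigma^2).
# The Fisher information about theta contributed by input u is
#   I(theta; u) = (dy/dtheta)^2 / sigma^2 = (u * exp(-theta * u))^2 / sigma^2,
# so the best u depends on the unknown theta.

rng = np.random.default_rng(0)
sigma = 0.1
inputs = np.linspace(0.2, 3.0, 15)            # candidate excitation amplitudes
sample_theta = lambda: rng.uniform(0.5, 1.5)  # prior modeling parameter uncertainty

def information(u, theta):
    """Fisher information of one measurement about theta at input u."""
    dy_dtheta = -u * np.exp(-theta * u)
    return dy_dtheta**2 / sigma**2

# Epsilon-greedy bandit: reward for picking input u is the information
# gained at a theta drawn from the prior, so the learned choice is robust
# to the parameter uncertainty rather than tuned to one nominal value.
q = np.zeros_like(inputs)       # running mean reward per candidate input
counts = np.zeros_like(inputs)
for step in range(5000):
    if rng.random() < 0.1:
        a = int(rng.integers(len(inputs)))    # explore
    else:
        a = int(np.argmax(q))                 # exploit current best input
    r = information(inputs[a], sample_theta())
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]            # incremental mean update

best_u = inputs[int(np.argmax(q))]

# Reference: expected information averaged over the prior, computed numerically.
thetas = np.linspace(0.5, 1.5, 201)
expected_info = [np.mean([information(u, t) for t in thetas]) for u in inputs]
u_star = inputs[int(np.argmax(expected_info))]
print(f"learned input {best_u:.2f}, numerically optimal input {u_star:.2f}")
```

In this toy setting the learned input lands near the amplitude that maximizes the prior-averaged information, rather than the input that would be optimal for any single fixed theta. The paper's framework replaces the one-step bandit with a sequential policy over full excitation trajectories, but the robustness mechanism is the same: rewards are evaluated under parameter uncertainty.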
