Abstract

Deep reinforcement learning (DRL), which excels at solving a wide variety of Atari and board games, is an area of machine learning that combines deep learning and reinforcement learning (RL). However, to the authors' best knowledge, few studies have applied the latest DRL algorithms to real-world powertrain control problems, and where they have, the large amount of random exploration that classical model-free DRL algorithms typically require to reach good control performance makes direct implementation on a real plant almost impossible. Unlike most other DRL studies, whose control strategies can only be trained in a simulation environment, especially when a control strategy has to be learned from scratch, this study builds a hybrid end-to-end control strategy that combines one of the latest DRL approaches, a dueling deep Q-network (DQN), with a traditional proportional-integral-derivative (PID) controller, assuming that no high-fidelity simulation model exists. Taking the boost control of a diesel engine with a variable geometry turbocharger (VGT) and cooled exhaust gas recirculation (EGR) as an example, under a common driving cycle the integral absolute error (IAE) values of the proposed algorithm improve on a fine-tuned PID benchmark by 20.66% and 9.7% for the control performance and generality indices, respectively. In addition, the proposed method improves system adaptiveness by adding a redundant control module. This makes it attractive for real-plant control problems for which no simulation model exists and whose environment may change over time.
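As a concrete sketch of the approach described above (not the authors' code), the following assumes a PyTorch implementation of the dueling Q-network together with a hybrid control step in which the PID output is trimmed by the greedy DQN action. The layer sizes, the state vector, the discrete set of VGT-vane corrections, and the additive fusion of the two controllers are illustrative assumptions. The IAE metric quoted above is the standard integral of the absolute tracking error, $\mathrm{IAE} = \int_0^T \lvert e(t) \rvert \, dt$.

```python
# Minimal sketch (not the authors' released code): a dueling Q-network and a
# hybrid PID + DQN control step, assuming PyTorch. Layer sizes, the state
# vector, the discrete set of VGT-vane corrections, and the additive fusion
# of the two controllers are illustrative assumptions.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
        return v + a - a.mean(dim=-1, keepdim=True)

def hybrid_vgt_command(q_net: DuelingQNet, state, pid_out: float,
                       corrections=(-0.02, 0.0, 0.02)) -> float:
    """Hypothetical fusion: the PID supplies a baseline VGT vane position and
    the greedy dueling-DQN action adds a small discrete correction to it."""
    with torch.no_grad():
        a_idx = q_net(torch.as_tensor(state, dtype=torch.float32)).argmax().item()
    return pid_out + corrections[a_idx]
```

The dueling decomposition lets the network estimate how good the current operating point is (the V stream) separately from the relative merit of each vane adjustment (the A stream), which tends to speed up learning when many actions have similar values.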

Highlights

  • Turbocharging and boosting are key technologies in the continued drive for improved internal combustion engine efficiency with reduced emissions [1]

  • DRL can develop good transient control behavior by direct interaction with its environment; however, it takes much time for the algorithm to learn from no experience, and it is hardly possible to train the algorithm directly on a real plant due to its random exploration when a control strategy has to be learned from scratch

  • These algorithms typically require a very large amount of random exploration before achieving good control performance; it is hardly possible to apply them directly on a real plant, so they have to rely heavily on a simulation environment, especially when a control strategy has to be learned from scratch (see the exploration sketch after this list)
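The exploration burden noted in the highlights comes from the epsilon-greedy action selection that model-free DQN-style agents typically use; the sketch below is illustrative, not taken from the paper. Early in training epsilon is close to 1, so most commands sent to the plant are random, which is tolerable in a simulator but potentially damaging on a real engine.

```python
# Minimal sketch of epsilon-greedy exploration in a DQN-style agent; the
# function name and arguments are illustrative assumptions. With epsilon
# near 1 (early training), almost every action is random exploration.
import random
import torch

def epsilon_greedy(q_net, state, n_actions: int, epsilon: float) -> int:
    if random.random() < epsilon:
        return random.randrange(n_actions)  # random exploratory action
    with torch.no_grad():                   # greedy (exploiting) action
        return q_net(torch.as_tensor(state, dtype=torch.float32)).argmax().item()
```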

Introduction

Turbocharging and boosting are key technologies in the continued drive for improved internal combustion engine efficiency with reduced emissions [1]. For more than a decade, engine boosting has seen widespread adoption by passenger and heavy goods vehicle powertrains in order to increase specific power and enable the downsizing megatrend [2]. Growing expectations of vehicle performance, including an excellent transient response with high boost levels, have converged with the demand for increased downsizing and higher levels of EGR. The rated power and torque of downsized units are conventionally regained via fixed-geometry turbocharging [4]. The transient behavior of such systems is limited by the usual requirement of a large turbocharger, especially if a high-end torque is required [5,6].
