Abstract
This paper introduces reflex-augmented reinforcement learning (RARL) for operating strategies in automotive electrical energy management. RARL overcomes the limitations of rule-based decision systems (RBDS) and addresses the growing complexity of a vehicle's electrical energy system. We propose a deep Q-learning-based RARL approach for operating strategies that determine the behavior of the electrical energy system; it also provides a general way to apply reinforcement learning in cybernetic management systems for safety-critical applications. In a simulation-based study covering more than 50 hours of driving with an extensive model of a vehicular electrical energy system, we show that RARL-based operating strategies fulfill the major requirements of a real vehicle. Compared to an RBDS, RARL requires less effort to design an operating strategy of this level of performance. Furthermore, we evaluate different variants of RARL's biologically inspired reflex, which enables its application in safety-critical systems. Finally, we not only provide an approach to replace the RBDS but also argue that RARL is key to integrating further sources of information into decision-making, enhancing electrical energy management.
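The abstract does not detail the reflex mechanism, but its core idea (a learned policy whose proposed actions are filtered by a rule-based reflex before execution) can be sketched as follows. All names, the action set, the one-step battery model, and the state-of-charge bounds here are illustrative assumptions, not taken from the paper; `q_values` stands in for a trained deep Q-network.

```python
import random

# Hypothetical sketch of reflex-augmented action selection (all names and
# parameters assumed, not from the paper). The learned policy proposes an
# action; a rule-based reflex vetoes it whenever it would violate a safety
# bound, analogous to a biological reflex overriding deliberate behavior.

ACTIONS = [-2.0, -1.0, 0.0, 1.0, 2.0]   # battery power setpoint change (assumed units)
SOC_MIN, SOC_MAX = 0.4, 0.9             # assumed safe state-of-charge window

def q_values(state):
    """Stand-in for a trained deep Q-network: one score per action."""
    # Toy heuristic: prefer charging when SOC is low, discharging when high.
    return [-(state["soc"] - 0.65) * a for a in ACTIONS]

def next_soc(state, action):
    """Crude one-step battery model: positive actions charge the battery."""
    return state["soc"] + 0.01 * action

def reflex(state, action):
    """Pass safe actions through; otherwise substitute the closest safe one."""
    if SOC_MIN <= next_soc(state, action) <= SOC_MAX:
        return action
    safe = [a for a in ACTIONS if SOC_MIN <= next_soc(state, a) <= SOC_MAX]
    return min(safe, key=lambda a: abs(a - action)) if safe else 0.0

def select_action(state, epsilon=0.1):
    """Epsilon-greedy proposal from the Q-values, filtered by the reflex."""
    if random.random() < epsilon:
        proposal = random.choice(ACTIONS)
    else:
        qs = q_values(state)
        proposal = ACTIONS[max(range(len(ACTIONS)), key=qs.__getitem__)]
    return reflex(state, proposal)
```

For example, near the lower SOC bound the reflex overrides a discharging proposal: `reflex({"soc": 0.40}, -2.0)` returns `0.0`, the closest action that keeps the state of charge inside the safe window.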