Abstract
Extended range electric vehicles (EREVs) operate both as an electric vehicle (EV) and as a hybrid electric vehicle (HEV). As a hybrid, the on-board range extender (REx) system provides additional energy to increase the feasible driving range. In this paper, we evaluate an experimental research EREV based on the 2016 Chevrolet Camaro platform for optimal energy management control. We use model-in-the-loop and software-in-the-loop environments to validate the data-driven power loss model of the research vehicle. We discuss the limitations of conventional energy management control algorithms and then propose an algorithm derived from adaptive real-time dynamic programming (ARTDP) with a distance constraint for energy consumption optimization. To achieve near real-time operation, the algorithm recomputes the optimal parameters by monitoring deviations of the energy storage system (ESS) state of charge from the previously computed optimal trajectory. The proposed algorithm adapts to variability arising from driving behavior or system limitations while maintaining the target driving range. The net energy consumption evaluation shows a maximum improvement of 9.8% over the conventional charge depleting/charge sustaining (CD/CS) algorithm used in EREVs. Thus, the proposed algorithm is adaptable and fault tolerant while remaining close to the global optimal solution.
Highlights
Hybrid electric vehicles (HEVs) and electric vehicles (EVs) are worthy alternatives to conventional gasoline-powered vehicles
The net energy consumption evaluation shows a maximum improvement of 9.8% over the conventional charge depleting/charge sustaining (CD/CS) algorithm used in Extended range electric vehicles (EREVs)
We focus on an extended range electric vehicle (EREV) for energy management optimization research
Summary
Hybrid electric vehicles (HEVs) and electric vehicles (EVs) are worthy alternatives to conventional gasoline-powered vehicles. Distinct charge depleting (CD) and charge sustaining (CS) operating modes constitute the conventional EREV energy management approach. Implementations of this strategy with a rule-based (RB) algorithm are well documented in the literature [6,7]. For real-time online implementation, a combination of Markov decision process (MDP) and stochastic dynamic programming (SDP) has been implemented and validated in the model-in-the-loop (MiL) environment. This approach showed ∼24% improvement over a simple RB strategy but remained ∼4% from the global optimal solution. The solution also needs to be computationally inexpensive relative to more sophisticated techniques such as machine learning. To meet these requirements, we propose a real-time online implementable energy management algorithm based on the adaptive real-time dynamic programming (ARTDP) approach.
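The CD/CS baseline and the ARTDP-style recomputation trigger described above can be illustrated with a minimal sketch. All thresholds, function names, and the linear depletion reference here are illustrative assumptions, not the paper's calibrated implementation:

```python
# Hedged sketch of (a) rule-based CD/CS mode selection, (b) a
# distance-constrained planned SoC trajectory, and (c) an ARTDP-style
# trigger that requests re-optimization when the measured SoC drifts
# from the planned trajectory. All numeric thresholds are assumed
# placeholders, not values from the paper.

CS_SOC_THRESHOLD = 0.25      # assumed SoC floor where CD switches to CS
RECOMPUTE_DEVIATION = 0.05   # assumed SoC deviation that triggers re-optimization


def cdcs_mode(soc: float) -> str:
    """Conventional strategy: charge-depleting (REx off) until the SoC
    floor is reached, then charge-sustaining (REx on)."""
    return "CS" if soc <= CS_SOC_THRESHOLD else "CD"


def planned_soc(distance: float, target_distance: float,
                soc0: float, soc_min: float) -> float:
    """Distance-constrained reference: deplete SoC linearly so the
    minimum SoC is reached exactly at the target driving distance."""
    frac = min(distance / target_distance, 1.0)
    return soc0 - (soc0 - soc_min) * frac


def needs_recompute(soc_actual: float, soc_planned: float) -> bool:
    """ARTDP-style trigger: re-run the optimizer when the measured SoC
    deviates too far from the previously computed optimal trajectory."""
    return abs(soc_actual - soc_planned) > RECOMPUTE_DEVIATION
```

In this sketch, aggressive driving that depletes the battery faster than the planned trajectory would exceed the deviation threshold and prompt a recomputation, which is the adaptability mechanism the Summary describes at a high level.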