Energy Management Strategy for a Series Hybrid Electric Vehicle Using Improved Deep Q-network Learning Algorithm with Prioritized Replay
Abstract
Building on a previous deep Q-network (DQN) based energy management strategy (EMS), this paper explores two improvements for more efficient and stable training and better performance. First, the architecture of the original DQN is changed so that the network learns separately the value of the current driving/vehicle state and the advantage of each EMS action in that state, and a duplicate network is adopted for Q-value calculation. Second, prioritized replay is introduced so that collected data are used more efficiently during training of the DQN based EMS. Simulation results show that the improved DQN based EMS converges faster and achieves a higher reward than the original DQN based EMS. Simulation on the China typical urban driving cycle for the series hybrid electric vehicle indicates that the fuel consumption of the improved DQN based EMS is 6.07 L/100 km, about 8.4% higher than that of the DP based benchmark EMS, and about 3% lower than that of the original DQN based EMS (6.24 L/100 km).
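The prioritized replay mechanism mentioned in the abstract samples stored transitions in proportion to their TD error rather than uniformly, so informative experiences are revisited more often. A minimal sketch of proportional prioritized replay is shown below; the class name, the hyperparameters `alpha` and `beta`, and the transition format are all illustrative assumptions, not the paper's implementation.

```python
import random

class PrioritizedReplayBuffer:
    """Illustrative proportional prioritized replay buffer (not the paper's code)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly the TD error shapes sampling
        self.data = []              # stored transitions
        self.priorities = []        # one priority per stored transition
        self.pos = 0                # ring-buffer write index

    def add(self, transition, td_error=1.0):
        # New transitions get priority proportional to |TD error|^alpha;
        # the small constant keeps zero-error transitions sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        n = len(self.data)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]   # normalize for stable updates
        batch = [self.data[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # After a training step, refresh priorities with the new TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In a DQN training loop, each sampled batch would be weighted by the returned importance-sampling weights when computing the loss, and `update_priorities` would be called with the fresh TD errors after every gradient step.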
DOI
10.12783/dteees/iceee2018/27794