Abstract

This paper proposes an anti-disturbance car-following strategy that attenuates (i) exogenous disturbances from preceding traffic oscillations and (ii) endogenous disturbances in vehicular control systems (e.g., wind gusts, ground friction, and rolling resistance). First, a modified robust controller generates expert car-following control experience. Next, the expert behavior is imitated via behavioral cloning (BC), endowing the policy with anti-disturbance ability. Finally, the resulting policy is optimized with self-supervised reinforcement learning (RL). Simulation experiments, comprising both training and evaluation phases, are implemented in Python, with car-following scenarios driven by ground-truth trajectories from the Next Generation Simulation (NGSIM) datasets. Through recursive interaction with the perturbed car-following environment, self-supervised RL yields stable policy improvement. The proposed anti-disturbance self-supervised RL (ADSSRL) policy exhibits a smooth, almost monotonically increasing reward curve. Further evaluation of disturbance-damping performance shows at least a 44.5% reduction in control-efficiency cost and a 10.1% reduction in driving-comfort cost compared with the baselines.
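The behavioral-cloning step described above can be sketched in a few lines: an expert controller maps car-following states to accelerations, and BC fits a policy to the resulting state-action pairs by supervised regression. The feedback law, gain values, and linear policy form below are illustrative assumptions for exposition, not the paper's actual controller or network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_controller(gap, rel_speed, k_gap=0.5, k_speed=0.8, desired_gap=20.0):
    """Toy feedback law standing in for the paper's modified robust controller."""
    return k_gap * (gap - desired_gap) + k_speed * rel_speed

# Collect expert demonstrations over randomly sampled car-following states.
states = np.column_stack([
    rng.uniform(5.0, 40.0, size=500),   # gap to the preceding vehicle [m]
    rng.uniform(-3.0, 3.0, size=500),   # relative speed [m/s]
])
actions = expert_controller(states[:, 0], states[:, 1])

# Behavioral cloning as least-squares regression onto a linear policy
# with a bias term (the expert above is linear, so BC recovers it exactly).
X = np.column_stack([states, np.ones(len(states))])
theta, *_ = np.linalg.lstsq(X, actions, rcond=None)

# The cloned policy reproduces the expert on a held-out state.
test_state = np.array([25.0, 1.0, 1.0])  # gap = 25 m, rel_speed = 1 m/s, bias
print(float(test_state @ theta))  # matches expert_controller(25.0, 1.0)
```

In the full pipeline this cloned policy would only initialize the agent; the self-supervised RL stage then continues to improve it through interaction with the perturbed car-following environment.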
