Abstract

The application of deep reinforcement learning (DRL) techniques in intelligent transportation systems has garnered significant attention. In this field, reward function design is a crucial factor for DRL performance. Current research predominantly relies on a trial-and-error approach to designing reward functions, lacking mathematical support and necessitating extensive empirical experimentation. Our research uses vehicle velocity control as a case study, builds training and test sets, and develops a DRL framework for speed control. This framework examines both single-objective and multi-objective optimization in reward function design. In single-objective optimization, we introduce "expected optimal velocity" as an optimization objective and analyze how different reward functions affect performance, providing a mathematical perspective on optimizing reward functions. In multi-objective optimization, we propose a reward function design paradigm and validate its effectiveness. Our findings offer a versatile framework and theoretical guidance for developing and optimizing reward functions in DRL, particularly for intelligent transportation systems.
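
To make the two reward settings described above concrete, the following is a minimal, illustrative sketch (not the paper's exact formulation): a single-objective reward that penalizes deviation from a hypothetical "expected optimal velocity" `v_opt`, and a multi-objective variant that combines speed tracking, comfort, and an energy proxy with assumed weights. All function names, terms, and coefficients here are assumptions for illustration only.

```python
# Illustrative sketch only; the paper's actual reward formulations may differ.

def single_objective_reward(v: float, v_opt: float = 15.0) -> float:
    """Reward grows as the vehicle speed v (m/s) approaches the assumed
    expected optimal velocity v_opt (m/s)."""
    return -((v - v_opt) ** 2)  # quadratic penalty on velocity error


def multi_objective_reward(
    v: float,
    accel: float,
    v_opt: float = 15.0,
    w_speed: float = 1.0,
    w_comfort: float = 0.1,
    w_energy: float = 0.05,
) -> float:
    """Weighted sum of assumed speed-tracking, comfort, and energy terms."""
    r_speed = -((v - v_opt) ** 2)   # track the expected optimal velocity
    r_comfort = -(accel ** 2)       # discourage harsh acceleration/braking
    r_energy = -abs(accel) * v      # crude proxy for energy consumption
    return w_speed * r_speed + w_comfort * r_comfort + w_energy * r_energy


if __name__ == "__main__":
    # Example: a vehicle at 12 m/s accelerating at 1 m/s^2.
    print(single_objective_reward(12.0))
    print(multi_objective_reward(12.0, 1.0))
```

A weighted-sum reward of this kind is one common way to express multi-objective trade-offs in DRL; the paper's proposed design paradigm should be consulted for the authors' actual construction.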
