Among existing techniques for lateral tracking in autonomous driving, none can learn from past trials and progressively reduce lateral error. In contrast, our proposed iterative learning control (ILC) scheme significantly enhances tracking performance in vision-based classical control, particularly under challenging environmental conditions. Our vision-based control system performs real-time semantic segmentation with the ENet model to extract the unstructured road area; linear regression then estimates steering adjustments, and a visual PID controller keeps the vehicle on the road’s centerline. The performance of this controller varies with lighting conditions, notably in dense shade, where the vehicle tends to deviate from the desired path. To assess ILC’s potential for reducing error over successive trials, we examined several ILC structures and compared them with a pure PID controller. Although changing lighting conditions during the experiments complicated direct comparison of the PID and ILC designs, ILC consistently reduced tracking errors and improved path alignment with each iteration. Notably, the error reduction became more pronounced as the number of learning gains in the ILC design increased. Our experimental results underscore the overall effectiveness of ILC for tracking the desired path in autonomous driving scenarios, particularly under varying environmental conditions.
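The trial-to-trial learning idea behind ILC can be illustrated with a minimal sketch. The toy plant, its constants, and the learning gain `gamma` below are illustrative assumptions, not the paper's vehicle model or gains; the update shown is the classic single-gain P-type ILC law, u_{k+1}(t) = u_k(t) + gamma·e_k(t+1), applied over repeated runs of the same task:

```python
import numpy as np

# Hypothetical stand-in for the vehicle's lateral-error dynamics
# (constants are illustrative, not identified from a real vehicle):
#   y[t+1] = a*y[t] + b*u[t]
a, b = 0.2, 1.0
T = 50
ref = np.ones(T)   # desired lateral-offset trajectory over one trial
ref[0] = 0.0       # each trial starts on the reference

def run_trial(u):
    """Simulate one repetition of the task; return the output trajectory."""
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

# P-type ILC update: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1).
# The next trial's input is corrected by the previous trial's error,
# shifted one step to account for the one-sample input-output delay.
gamma = 0.8
u = np.zeros(T)
errors = []
for k in range(10):
    y = run_trial(u)
    e = ref - y
    errors.append(np.linalg.norm(e))
    u[:-1] += gamma * e[1:]

# errors[k] shrinks from trial to trial: the controller "learns" the task
```

Designs with more learning gains (e.g. correcting with several time-shifted error samples per step) generalize this single-gain update, which is consistent with the abstract's observation that more learning gains yielded larger error reductions.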