Abstract

This paper describes the implementation of several algorithms for controlling the distance between a lead vehicle and a following (ego) vehicle. The ego robot was equipped with a monocular camera and a rotating laser sensor (LDS). The monocular camera was used for object detection with the Aggregate Channel Features (ACF) detection algorithm, and the width of the bounding box generated by the detector was used to determine the distance between the lead and following vehicles. Since this research focused on longitudinal autonomy, the data from the rotating laser sensor were downsampled from 360 points to 30 points covering the front view of the vehicle, and all data points were transformed into a planar world coordinate frame (a two-dimensional plane). The outputs of the camera and the LDS were fused to obtain accurate distance measurements to the lead vehicle. Sensor calibration was performed by comparing sensor data with ground-truth values. A Kalman filter was used to implement the sensor fusion, combining perception data from the monocular camera and the LDS for accurate position and velocity estimation. The calibration provided information about the sensor noise and the deviation of the sensor data from ground truth, and these values were used to determine the error covariance matrices of the Kalman filter. For implementation, the Robot Operating System (ROS)-MATLAB platform was used for communication between the robot and the host personal computer (PC). The experiments evaluated the performance of Proportional (P), Proportional-Integral (PI), and Model Predictive Control (MPC) in maintaining a minimum distance between the vehicles. For the MPC implementation in MATLAB, the Model Predictive Control Quadratic Programming (MPCQP) solver was used to obtain the optimal control output. The results show that MPC yields faster response times than P and PI control. The algorithms were evaluated with the lead vehicle moving at constant velocity and at constant acceleration. The steady-state errors of the P and PI controllers were around 0.1 meters (m) in both scenarios, ranging from 0 to 0.2 m for constant velocity and from 0 to 0.15 m for ramp velocity, respectively; for MPC, the steady-state error varied from −0.05 m to 0.05 m in both scenarios. This spread in steady-state error arose because the ego vehicle's speed varied over time to maintain the minimum relative distance between the robots, and a communication delay in the system also affected the behavior of the controllers. The MPC was more sensitive to this communication delay, whereas its effect on the P and PI controllers was negligible; this sensitivity resulted in different velocity profiles for the ego vehicle under MPC than under the P or PI controllers.
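As an illustration of the fusion step described above, the sketch below shows a minimal constant-velocity Kalman filter in MATLAB that combines two noisy range readings (one derived from the camera bounding-box width, one from the LDS) into a fused distance and relative-velocity estimate. This is not the authors' code: the sample time, noise covariances, and synthetic measurements are assumptions for illustration only, whereas in the paper the covariances are derived from the sensor calibration against ground truth.

```matlab
% Minimal sketch of camera/LDS fusion with a constant-velocity Kalman filter.
% All numeric values below are illustrative assumptions.

dt = 0.1;                      % assumed sample time [s]
N  = 100;                      % number of simulated steps

A = [1 dt; 0 1];               % state transition for x = [distance; rel. velocity]
H = [1 0; 1 0];                % both sensors observe the distance only
Q = diag([1e-3, 1e-3]);        % assumed process noise covariance
R = diag([0.25, 0.04]);        % assumed measurement noise (camera, LDS)

% Synthetic measurements for illustration (replace with real sensor data)
true_dist = 1.0 + 0.05 * dt * (0:N-1);          % lead vehicle slowly pulling away
z_cam = true_dist + sqrt(R(1,1)) * randn(1, N); % camera (bounding-box) distance
z_lds = true_dist + sqrt(R(2,2)) * randn(1, N); % LDS range

x = [z_lds(1); 0];             % initial state guess
P = eye(2);                    % initial state covariance
est = zeros(2, N);             % fused [distance; relative velocity] over time

for k = 1:N
    % Predict
    x = A * x;
    P = A * P * A' + Q;

    % Update with the stacked measurement [camera distance; LDS distance]
    z = [z_cam(k); z_lds(k)];
    S = H * P * H' + R;        % innovation covariance
    K = (P * H') / S;          % Kalman gain
    x = x + K * (z - H * x);
    P = (eye(2) - K * H) * P;

    est(:, k) = x;
end
```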
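The gap-keeping control itself can be sketched in a few lines. The fragment below is an illustrative P/PI law, not the authors' tuned controller: the ego velocity command is driven by the error between the fused distance estimate and a desired minimum gap, optionally augmented with an integral term. The gains, desired gap, and velocity limit are assumed values, and the command would be saturated before being sent to the robot as its linear-velocity command.

```matlab
% Illustrative P / PI gap-keeping law (assumed gains and setpoints, not the
% paper's tuned values). d_est stands for the fused distance estimate from
% the Kalman filter at the current control step.

d_des = 0.5;                    % assumed desired gap to the lead vehicle [m]
Kp    = 1.2;                    % assumed proportional gain
Ki    = 0.4;                    % assumed integral gain (PI only)
dt    = 0.1;                    % assumed control period [s]
v_max = 0.22;                   % assumed ego velocity limit [m/s]

e_int = 0;                      % running integral of the gap error
d_est = 0.9;                    % example fused distance estimate [m]

e     = d_est - d_des;          % positive when the gap is larger than desired
e_int = e_int + e * dt;

v_P   = Kp * e;                 % P command
v_PI  = Kp * e + Ki * e_int;    % PI command

% Saturate before publishing as the ego vehicle's velocity command
v_cmd = min(max(v_PI, 0), v_max);
```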
