To ensure the accuracy and reliability of Advanced Driver Assistance Systems (ADAS), offline calibration must be performed before vehicles leave the factory. This paper proposes a machine-vision-based method for reconstructing the vehicle coordinate system that can be applied to the offline calibration of ADAS. Firstly, this study explains the preliminary preparations, such as the selection of feature points and the choice of camera model, based on the actual application scenario and testing requirements. Subsequently, a YOLO model is trained to detect feature regions, and feature point coordinates are extracted from these regions using template matching and ellipse fitting. Finally, a validation experiment is designed to evaluate the accuracy of the method using metrics such as the vehicle's lateral and longitudinal offset distances and yaw angle. Experimental results show that, compared to traditional vehicle alignment platforms, this method improves reconstruction accuracy while reducing costs.
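The abstract outlines a two-stage extraction pipeline: YOLO proposes a feature region, then template matching and ellipse fitting recover the feature-point coordinates. The sketch below illustrates only that second stage, assuming an OpenCV-based implementation; the function name, inputs, and thresholding choices are illustrative assumptions, not details taken from the paper.

```python
import cv2


def extract_feature_point(region_bgr, template_gray):
    """Illustrative sketch: locate a circular marker inside a detected
    feature region and estimate its center via template matching followed
    by ellipse fitting. Inputs are a BGR crop of the YOLO-detected region
    and a grayscale template of the marker (both assumed for this example).
    """
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)

    # 1) Template matching narrows the search to the best-matching patch.
    scores = cv2.matchTemplate(gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)          # top-left of best match
    h, w = template_gray.shape
    patch = gray[y:y + h, x:x + w]

    # 2) Ellipse fitting on the marker contour gives a sub-pixel center.
    _, binary = cv2.threshold(patch, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                              # fitEllipse needs >= 5 points
        return None
    (cx, cy), _axes, _angle = cv2.fitEllipse(largest)

    # Return the center in the coordinate frame of the original region crop.
    return x + cx, y + cy
```

In practice the returned pixel coordinates would then be mapped into the vehicle coordinate system using the calibrated camera model mentioned in the abstract; that step depends on details not given here.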