Abstract

This paper presents a methodology for extracting the kinematic variables of road vehicles from unmanned aerial vehicle (UAV) footage. The oriented bounding boxes of the vehicles are identified in the aerial view of the intersection, and kinematic variables such as position, longitudinal velocity, lateral velocity, yaw angle, and yaw rate are determined. The bounding boxes are converted to the perspective of a roadside camera using homography, generating labeled data sets for training the machine learning-based perception systems of smart intersections. Compared with conventional GPS-based approaches, the proposed method provides smoother data and more information about the dynamics of the vehicles, while requiring no additional instrumentation on the vehicles. The extracted kinematic variables can be used for motion prediction of road traffic participants and for control of connected automated vehicles (CAVs) in intelligent transportation systems.
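The homography step mentioned above maps points between the aerial view and the roadside-camera view via a 3x3 projective transform applied in homogeneous coordinates. The following is a minimal sketch of that mapping using NumPy; the matrix values and corner coordinates are illustrative placeholders, as the paper's actual calibration data is not given here. In practice, H would be estimated from ground-point correspondences between the two views (e.g., with `cv2.findHomography`).

```python
import numpy as np

# Hypothetical 3x3 homography mapping aerial-view (UAV) pixel
# coordinates to roadside-camera pixel coordinates. Placeholder values;
# a real H would be estimated from point correspondences.
H = np.array([
    [1.2,  0.1,  30.0],
    [0.05, 0.9,  15.0],
    [1e-4, 2e-4,  1.0],
])

def project_points(H, pts):
    """Apply homography H to an (N, 2) array of pixel coordinates."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # lift to homogeneous coords
    mapped = homog @ H.T                    # apply the projective transform
    return mapped[:, :2] / mapped[:, 2:3]   # dehomogenize (divide by w)

# Corners of one oriented bounding box in the aerial view (hypothetical).
corners = [(100.0, 200.0), (160.0, 210.0), (150.0, 260.0), (90.0, 250.0)]
roadside_corners = project_points(H, corners)
print(roadside_corners.shape)  # (4, 2): the same box in the roadside view
```

The dehomogenization step (dividing by the third coordinate) is what makes the mapping projective rather than affine, which is necessary because the UAV and roadside cameras view the road plane from very different angles.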
