Abstract
Owing to the widespread use of unmanned aerial vehicles (UAVs) in remote sensing, techniques for extracting vehicle speed and trajectory data from aerial video are well developed, whether based on traditional optical features or on deep learning. However, few studies address the problem of video shake, and the extracted vehicle data are rarely linked to lane lines. To address these deficiencies, we formulated a more reliable method for acquiring real traffic data that outperforms traditional methods in data accuracy and integrity. First, the method applies the scale-invariant feature transform (SIFT) algorithm to detect, describe, and match local features in high-altitude fixed-point aerial photographs. Second, it uses “you only look once” version 5 (YOLOv5) and deep simple online and real-time tracking (DeepSORT) to detect and track moving vehicles. Finally, a Python program computes each vehicle’s speed and its distance to a marked reference line. The results show that the method achieved over 95% accuracy in speed detection and less than 20 cm tolerance in vehicle trajectory mapping, and that it mitigates common problems of low-quality aerial footage and inaccurate lane-line recognition. The approach can also be used to establish a Frenet coordinate system, enabling further analysis of driving behavior and road traffic safety.
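The speed-extraction step described above can be illustrated with a minimal sketch. This is not the authors’ program: the function name, the assumed ground sampling distance (metres per pixel), and the frame rate are all hypothetical, and the sketch simply converts a tracked vehicle’s per-frame pixel displacement into a speed.

```python
import math

def speed_kmh(p1, p2, metres_per_pixel, fps):
    """Hypothetical helper: speed of a tracked vehicle between two
    consecutive frames, given its pixel positions p1 and p2, an assumed
    ground sampling distance (metres per pixel), and the video frame rate."""
    # Convert the pixel displacement to metres on the ground plane.
    dx = (p2[0] - p1[0]) * metres_per_pixel
    dy = (p2[1] - p1[1]) * metres_per_pixel
    dist_m = math.hypot(dx, dy)
    # Metres per frame -> metres per second -> km/h.
    return dist_m * fps * 3.6
```

For example, a displacement of 2 pixels per frame at an assumed 0.05 m/pixel and 30 fps corresponds to roughly 10.8 km/h; in practice the scale factor would come from the camera altitude and calibration, and positions would be smoothed across frames before differencing.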