Abstract

The three-dimensional trajectory data of vehicles have important practical value for traffic behavior analysis. Current cross-camera trajectory extraction methods suffer from the narrow viewing angle of single-camera scenes and the lack of continuous trajectories in 3D space. To solve these problems, we propose an algorithm for vehicle spatial distribution and 3D trajectory extraction. First, a panoramic image of the road with spatial information is generated based on camera calibration, which is used to convert cross-camera perspectives into 3D physical space. Then, YOLOv4 is used to obtain 2D bounding boxes of vehicles in cross-camera scenes. Based on this information, 3D bounding boxes of vehicles are built with geometric constraints and used to obtain the projection centroids of the vehicles. Finally, by calculating the spatial distribution of the projection centroids in the panoramic image, the 3D trajectories of vehicles are extracted. Experimental results indicate that our algorithm effectively performs vehicle spatial distribution and 3D trajectory extraction in various traffic scenes and outperforms the comparison algorithms.

Highlights

  • Vehicle spatial distribution and 3D trajectory extraction is an important sub-task in the field of computer vision

  • The main contributions of this paper are as follows: (1) A road space fusion algorithm in cross-camera scenes based on camera calibration is proposed to generate the panoramic image with physical information in road space, which can be used to convert multiple cross-camera perspectives into continuous 3D physical space

  • (2) A 3D vehicle detection algorithm based on geometric constraints is proposed, which is used to describe vehicle spatial distribution in the panoramic image and to extract 3D trajectories
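The core of the first contribution is mapping points seen by each camera onto a common road plane. A minimal sketch of that mapping is below; the homography `H` here is a hypothetical calibration result chosen only for illustration (in the paper it would come from the camera calibration of each view):

```python
import numpy as np

# Hypothetical plane-to-plane homography from image pixels to
# road-plane coordinates in metres (illustrative values only).
H = np.array([
    [0.05, 0.0,  -10.0],
    [0.0,  0.08, -20.0],
    [0.0,  0.001,  1.0],
])

def image_to_road(u, v, H):
    """Project an image point (u, v) onto the road plane (Z = 0)
    using a plane-to-plane homography, then normalise the
    homogeneous coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

X, Y = image_to_road(400.0, 300.0, H)
```

Applying each camera's own homography in this way places all detections in one shared physical coordinate system, which is what allows the per-camera views to be fused into a single panoramic road image.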


Introduction

Vehicle spatial distribution and 3D trajectory extraction is an important sub-task in the field of computer vision. The main method of vehicle trajectory extraction across the whole space is cross-camera vehicle tracking, which obtains continuous vehicle trajectories from images taken by multiple cameras with or without overlapping areas. These methods usually contain three essential steps: camera calibration, vehicle detection and tracking in single-camera scenes, and cross-camera vehicle matching. Peng et al. [28] proposed a method of multi-camera vehicle detection and tracking in non-overlapping traffic surveillance, using a convolutional neural network (CNN) for object detection and feature extraction and a homography matrix for projecting vehicle trajectories onto a satellite map. This method can accurately show vehicle trajectories on a panoramic map, but the trajectories do not contain physical locations in 3D space.
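The three-step pipeline above can be sketched at the trajectory-stitching stage: each camera's track, given in pixel coordinates, is mapped by that camera's homography into a shared world frame and the pieces are merged by timestamp into one continuous trajectory. Everything below (the homographies, the detections, the helper names) is a hypothetical toy setup, not the paper's actual calibration:

```python
import numpy as np

def to_world(points_uv, H):
    """Map N image points (N x 2) to road-plane coordinates with homography H."""
    pts = np.hstack([points_uv, np.ones((len(points_uv), 1))])
    w = (H @ pts.T).T
    return w[:, :2] / w[:, 2:3]   # normalise homogeneous coordinates

# Hypothetical per-camera homographies into one shared world frame;
# camera 2 covers a road segment 50 m further along than camera 1.
H_cam1 = np.array([[0.1, 0.0,  0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 1.0]])
H_cam2 = np.array([[0.1, 0.0, 50.0], [0.0, 0.1, 0.0], [0.0, 0.0, 1.0]])

# (timestamp, u, v) detections of the same vehicle in each camera.
track_cam1 = [(0, 100, 200), (1, 300, 200)]
track_cam2 = [(2, 100, 200), (3, 300, 200)]

def stitch(tracks_with_H):
    """Concatenate per-camera tracks in world coordinates, time-ordered."""
    merged = []
    for track, H in tracks_with_H:
        ts = [t for t, _, _ in track]
        uv = np.array([[u, v] for _, u, v in track], dtype=float)
        merged += list(zip(ts, to_world(uv, H)))
    return sorted(merged, key=lambda item: item[0])

trajectory = stitch([(track_cam1, H_cam1), (track_cam2, H_cam2)])
```

Because both homographies target the same world frame, the merged trajectory advances smoothly across the camera handover instead of resetting its coordinates, which is the property the cross-camera matching step relies on.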

Framework
Camera Calibration Model and Parameter Calculation
Unified World Coordinate System and Road Panoramic Image Generation
Geometric Constraints
Experiments
BrnoCompSpeed Dataset
Actual Road Cross-Camera Scene
Conclusions