Abstract
The Cooperative Vehicle and Infrastructure System (CVIS) and the Autonomous Vehicle (AV) are two mainstream technologies for improving urban traffic efficiency and vehicle safety in the Intelligent Transportation System (ITS). However, significant obstacles remain before fully unmanned applications are ready for widespread adoption in a transportation system. To achieve fully driverless operation, the perception ability of the vehicle must be accurate, fast, continuous, and wide-ranging. In this paper, an interactive perception framework is proposed that combines the visual perception of the AV with the information interaction of CVIS. Based on this framework, an interactive perception-based multiple object tracking (IP-MOT) method is presented. IP-MOT can be divided into two parts. First, a Lidar-only multiple object tracking (L-MOT) method obtains the status of the surroundings using a voxel clustering algorithm. Second, the preliminary tracking result is fused with the interactive information to generate the trajectories of target vehicles. Two simulation platforms are established to verify the proposed methods: a CVIS simulation platform and a Virtual Reality (VR) test platform. The L-MOT algorithm is tested on a public dataset, and the IP-MOT algorithm is tested on our simulation platform. The results show that the IP-MOT algorithm improves object tracking accuracy and expands the vehicle perception range by combining CVIS and AV.
Highlights
As a highly complex system with a large number of different types of participants, the urban transportation system urgently needs to improve its intelligence level systematically
The motivation of this paper is summarized in three points: first, multiple-object real-time tracking is still a fundamental and challenging problem that is essential for vehicle obstacle avoidance and environmental situation prediction; second, combining the Cooperative Vehicle and Infrastructure System (CVIS) with autonomous driving technologies is a development trend that addresses the limitations of vehicle-centric perception
Seven shape factors are included in the feature vector, as defined in Eq. 5: n_p, the number of points falling into the cluster; n_v, the number of positive (occupied) voxels; mean_p and var_p, the mean and variance of the points; centroid, the position of the cluster centroid; and mean_i and var_i, the mean and variance of the reflectance intensity (see the sketch below)
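To make the feature vector concrete, the following minimal sketch assembles these seven factors for a single Lidar cluster. The voxel size, the (x, y, z, intensity) point layout, and the use of point-to-centroid distances for mean_p and var_p are assumptions of this illustration; the paper's exact Eq. 5 definitions are not reproduced here.

```python
import numpy as np

def cluster_shape_factors(points, voxel_size=0.2):
    """Assemble the seven shape factors for one Lidar cluster.

    `points` is an (N, 4) array of (x, y, z, intensity) returns.
    NOTE: the voxel size and the point-spread definitions of
    mean_p / var_p are assumptions of this sketch, not the paper's.
    """
    xyz = points[:, :3]
    intensity = points[:, 3]

    n_p = len(points)                                  # points falling into the cluster
    voxels = np.unique(np.floor(xyz / voxel_size).astype(int), axis=0)
    n_v = len(voxels)                                  # positive (occupied) voxels
    centroid = xyz.mean(axis=0)                        # cluster centroid position
    spread = np.linalg.norm(xyz - centroid, axis=1)
    mean_p, var_p = spread.mean(), spread.var()        # mean / variance of the points
    mean_i, var_i = intensity.mean(), intensity.var()  # mean / variance of intensity

    # Seven factors; the 3-D centroid makes the vector 9 numbers long.
    return np.concatenate([[n_p, n_v, mean_p, var_p], centroid, [mean_i, var_i]])
```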
Summary
As a highly complex system with a large number of different types of participants, the urban transportation system urgently needs to improve its intelligence level systematically. The motivation of this paper is summarized in three points: first, multiple-object real-time tracking is still a fundamental and challenging problem that is essential for vehicle obstacle avoidance and environmental situation prediction; second, combining CVIS with autonomous driving technologies is a development trend that addresses the limitations of vehicle-centric perception; third, building a highly realistic simulation test environment is necessary to verify our methods quickly, cheaply, and flexibly.

Unlike previous work, the Interactive Perception Multiple Object Tracking (IP-MOT) algorithm can continuously sense the positions of surrounding vehicles, even under visual occlusion or communication failure, and improves tracking accuracy. Both the loosely coupled and the tightly coupled positioning algorithms were introduced in previous work [48]. Here we introduce the multiple object tracking algorithm using Lidar and the fusion algorithm that enhances perception accuracy using V2V; a minimal fusion sketch follows below.
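As a rough illustration of how a preliminary L-MOT track could be fused with a V2V-communicated position, the sketch below applies a simple inverse-variance fusion with graceful fallback when either source drops out. The function name, the scalar variances, and the fusion rule are assumptions of this sketch; it stands in for, and does not reproduce, the paper's tightly coupled algorithm.

```python
import numpy as np

def fuse_track(lidar_pos, lidar_var, v2v_pos, v2v_var):
    """Covariance-weighted fusion of a Lidar track with a V2V report.

    Either source may be None (visual occlusion or communication
    failure), in which case the surviving estimate is used alone.
    Assumes at least one source is available per update. This
    inverse-variance rule is an illustrative stand-in, not the
    paper's exact tightly coupled formulation.
    """
    if lidar_pos is None:          # visual occlusion: fall back to V2V alone
        return np.asarray(v2v_pos), v2v_var
    if v2v_pos is None:            # communication failure: Lidar alone
        return np.asarray(lidar_pos), lidar_var

    # Weight each source by the other's variance (lower variance wins).
    w = v2v_var / (lidar_var + v2v_var)
    fused_pos = w * np.asarray(lidar_pos) + (1.0 - w) * np.asarray(v2v_pos)
    fused_var = (lidar_var * v2v_var) / (lidar_var + v2v_var)
    return fused_pos, fused_var
```

With both sources healthy, the fused position lies between the two estimates, closer to the more confident one; when either channel fails, tracking degrades gracefully to the remaining source, mirroring the continuous-sensing property claimed for IP-MOT.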