Abstract

Advanced driver assistance systems and highly automated driving functions require an enhanced frontal environment perception system. The requirements of such a system cannot be satisfied by any single existing automotive sensor alone. A commonly used sensor cluster for these functions consists of a mono-vision smart camera and an automotive radar. Sensor fusion combines the data of these sensors to achieve robust environment perception. Multi-object tracking algorithms provide a suitable software architecture for sensor data fusion. Several multi-object tracking algorithms, such as the JPDAF or the MHT, achieve good tracking performance; however, their computational requirements are significant due to their combinatorial complexity. The GM-PHD filter is a straightforward algorithm with favorable runtime characteristics that can track an unknown and time-varying number of objects. However, the conventional GM-PHD filter performs poorly in object cardinality estimation. This paper proposes a method that extends the GM-PHD filter with a detection-driven object birth model and a robust object extraction module, including Bayesian estimation of each object's existence probability, to compensate for the drawbacks of the conventional algorithm.
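As background for the approach summarized above, the following is a minimal Python sketch of one linear-Gaussian GM-PHD cycle with a detection-driven birth model and threshold-based object extraction. All matrices, weights, thresholds, and function names are illustrative assumptions, not the paper's implementation (which additionally maintains a Bayesian existence probability per extracted object and handles the full camera/radar measurement models).

```python
import numpy as np

# Illustrative 1-D constant-velocity model; parameters are assumptions, not the paper's.
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # motion model (dt = 0.1 s)
Q = 0.01 * np.eye(2)                      # process noise
H = np.array([[1.0, 0.0]])                # position-only measurement model
R = np.array([[0.04]])                    # measurement noise
p_s, p_d = 0.99, 0.9                      # survival / detection probabilities
clutter = 1e-3                            # clutter intensity kappa(z)

def predict(components, detections_prev, w_birth=0.05):
    """Predict surviving components and spawn birth components at previous detections."""
    predicted = [(p_s * w, F @ m, F @ P @ F.T + Q) for w, m, P in components]
    # Detection-driven birth: one low-weight Gaussian per previous measurement.
    births = [(w_birth, np.array([z, 0.0]), np.diag([0.1, 1.0])) for z in detections_prev]
    return predicted + births

def update(components, detections):
    """GM-PHD update: missed-detection terms plus one Kalman-updated term per (z, component)."""
    updated = [((1.0 - p_d) * w, m, P) for w, m, P in components]
    for z in detections:
        terms = []
        for w, m, P in components:
            S = H @ P @ H.T + R                       # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
            nu = z - (H @ m)[0]                       # innovation
            lik = np.exp(-0.5 * nu**2 / S[0, 0]) / np.sqrt(2 * np.pi * S[0, 0])
            terms.append((p_d * w * lik, m + K.flatten() * nu, (np.eye(2) - K @ H) @ P))
        norm = clutter + sum(t[0] for t in terms)
        updated += [(tw / norm, tm, tP) for tw, tm, tP in terms]
    return updated

def extract(components, w_min=0.5):
    """Naive state extraction: keep components whose weight exceeds a threshold
    (the paper replaces this with a robust, existence-probability-based module)."""
    return [m for w, m, P in components if w > w_min]

# One filter cycle on toy data: one persisting object near 0, one new detection near 5.
comps = [(1.0, np.array([0.0, 1.0]), np.eye(2))]
comps = predict(comps, detections_prev=[5.0])
comps = update(comps, detections=[0.12, 5.05])
print(extract(comps))
```

A practical GM-PHD implementation would also prune low-weight components and merge near-duplicate Gaussians after each update; those steps are omitted here for brevity.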
