Abstract

The perception capability of automated systems such as autonomous cars plays a crucial role in safe and reliable operation. With the continuously growing accuracy of deep neural networks for object detection on the one hand and the investigation of appropriate space representations for object tracking on the other, both essential perception components have received particular research attention in recent years. However, the early fusion of multiple sensors makes the determination of suitable measurement spaces a complex and non-trivial task. In this paper, we propose the use of a deep multi-modal object detection network for the early fusion of LiDAR and camera data, serving as the measurement source for an extended object tracking algorithm on Lie groups. We develop an extended Kalman filter and model the state space as the direct product Aff(2) × ℝ⁶, incorporating second- and third-order dynamics. We compare the tracking performance of different measurement space representations (SO(2) × ℝ⁴, SO(2)² × ℝ³, and Aff(2)) to evaluate how our object detection network encapsulates the measurement parameters and the associated uncertainties. Our results show that the lowest tracking errors in the case of single object tracking are obtained by representing the measurement space by the affine group. We therefore assume that the proposed object detection network captures the intrinsic relationships between the measurement parameters, in particular between position and orientation.
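To illustrate how a state or measurement element on the affine group can be handled in practice, the following minimal sketch (not the authors' implementation; the function names, the right-multiplicative retraction, and the use of SciPy's matrix exp/log are our assumptions) represents an Aff(2) element as a 3 × 3 homogeneous matrix and applies the local boxplus/boxminus operations commonly used in Lie-group Kalman filtering.

```python
# Illustrative sketch only: Aff(2) elements as 3x3 homogeneous matrices,
# with a right-multiplicative retraction X (+) delta = X * exp(delta^).
import numpy as np
from scipy.linalg import expm, logm

def hat_aff2(delta):
    """Map a 6-vector (tx, ty, a11, a12, a21, a22) to an aff(2) algebra matrix."""
    tx, ty, a11, a12, a21, a22 = delta
    return np.array([[a11, a12, tx],
                     [a21, a22, ty],
                     [0.0, 0.0, 0.0]])

def vee_aff2(X):
    """Inverse of hat_aff2: extract the 6-vector from an aff(2) algebra matrix."""
    return np.array([X[0, 2], X[1, 2], X[0, 0], X[0, 1], X[1, 0], X[1, 1]])

def boxplus(G, delta):
    """Perturb a group element G in Aff(2) by a local 6-vector delta."""
    return G @ expm(hat_aff2(delta))

def boxminus(G1, G2):
    """Local difference of two group elements; inverse of boxplus for small offsets."""
    return vee_aff2(logm(np.linalg.inv(G2) @ G1).real)

# Example: an object element with 30 deg orientation, axis scales (2, 1) as extent,
# and position (5.0, 1.5), perturbed by a small translational offset.
theta, sx, sy = np.deg2rad(30.0), 2.0, 1.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
G = np.eye(3)
G[:2, :2] = R @ np.diag([sx, sy])   # affine part: orientation and extent
G[:2, 2] = [5.0, 1.5]               # position
delta = np.array([0.1, -0.05, 0.0, 0.0, 0.0, 0.0])
G_new = boxplus(G, delta)
print(np.round(boxminus(G_new, G), 6))  # recovers delta for small perturbations
```

In a Lie-group extended Kalman filter, such boxplus/boxminus operations would replace the additive state update and innovation of the standard filter, while the covariance is maintained in the tangent space; the sketch above only shows the group-level bookkeeping, not the filter itself.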
