Abstract
Environment perception is a critical enabler for automated driving systems since it allows a comprehensive understanding of traffic situations. We propose a method based on an end-to-end convolutional neural network that reasons simultaneously about the location of objects in the image and their orientations on the ground plane. The same set of convolutional layers is used for the different tasks involved, avoiding repeated computation over the same image. Experiments on the KITTI dataset show that our method achieves state-of-the-art performance for object detection and viewpoint estimation, and is particularly suitable for understanding traffic situations from on-board vision systems.
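To illustrate the idea of sharing one set of convolutional layers across tasks, the following is a minimal sketch in PyTorch. It is not the paper's actual architecture: the backbone, head names, and the choice of discretized viewpoint bins are all illustrative assumptions; the point is only that the shared features are computed once and then consumed by separate detection and orientation heads.

```python
import torch
import torch.nn as nn

class SharedBackboneDetector(nn.Module):
    """Toy multi-task network: one convolutional backbone feeds both
    a box/class head and a viewpoint (orientation) head."""

    def __init__(self, num_classes=3, num_viewpoint_bins=16):
        super().__init__()
        # Shared convolutional feature extractor (stand-in for the paper's backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads computed from the same shared features.
        self.cls_head = nn.Linear(64, num_classes)                # object class scores
        self.box_head = nn.Linear(64, 4)                          # bounding-box regression
        self.viewpoint_head = nn.Linear(64, num_viewpoint_bins)   # discretized orientation

    def forward(self, images):
        feats = self.backbone(images)  # shared computation, run once per image
        return {
            "class_logits": self.cls_head(feats),
            "box_deltas": self.box_head(feats),
            "viewpoint_logits": self.viewpoint_head(feats),
        }

# Example: one forward pass on a dummy batch of two images.
model = SharedBackboneDetector()
out = model(torch.randn(2, 3, 128, 128))
print({k: v.shape for k, v in out.items()})
```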