Abstract
Efficiently encoding high-definition maps is of great importance for autonomous navigation: such maps are widely used for tasks like predicting the future behaviour of traffic participants and planning a safe trajectory. Previous methods tackled this problem either by rasterizing the road into a multi-channel image or by sampling the vectorial representation into fixed-size sub-segments (often called lanelets). The latter has become the go-to method due to its efficiency and expressiveness. Its main limitation, however, is that the points forming a geometrical shape must be sampled at a fixed spatial resolution, so the full potential of this representation is not exploited. In this work, we address this problem by making two additions to the classical architectures used for encoding such a heterogeneous structure. Rather than using a single network to encode a map element, we propose decomposing map attributes such as road lines, traffic signs, and road edges into structure and style features, an approach inspired by recent progress in photo-realistic style transfer. The structural features are encoded by a shared message-passing network that processes the essential positional data without resampling at a fixed resolution, adapting the spatial dimension of the representation at inference time to each element's initial length and complexity. The style attributes are encoded separately, which allows a new type of map element to be added without retraining the whole system. Evaluation on various edge-prediction and node-classification tasks shows that our method outperforms the previously mentioned approaches on both, while having a 53% smaller memory footprint on average when representing a scenario.
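The structure/style decomposition described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation: the function names, the single neighbour-averaging message pass, and the max-pool readout are all hypothetical stand-ins. The point it demonstrates is that a shared message-passing encoder can consume polylines of any length and still emit a fixed-size structural feature, to which a per-type style embedding is concatenated.

```python
import numpy as np

def encode_element(points, w_struct, style_table, elem_type):
    """Encode one map element as [structure | style] features.

    points:      (N, 2) polyline coordinates; N may differ per element
    w_struct:    (4, d) shared projection weights (hypothetical learned params)
    style_table: dict mapping element type -> learned style embedding
    elem_type:   key into style_table, e.g. "lane_line"
    """
    # Structural branch: one message-passing round where each point
    # aggregates its two neighbours (np.roll treats the polyline as
    # cyclic; a real encoder would mask the endpoints of open shapes).
    neigh = 0.5 * (np.roll(points, 1, axis=0) + np.roll(points, -1, axis=0))
    msg = np.concatenate([points, neigh], axis=1)   # (N, 4)
    h = np.maximum(msg @ w_struct, 0.0)             # shared MLP + ReLU, (N, d)
    structure = h.max(axis=0)                       # length-invariant pooling, (d,)

    # Style branch: a separate per-type embedding, so new element types
    # only need a new table entry rather than retraining the encoder.
    style = style_table[elem_type]
    return np.concatenate([structure, style])
```

A 5-point and a 50-point polyline both map to the same output dimensionality, which is what removes the need for resampling at a fixed resolution.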