Abstract
Perception of the environment is key for autonomous driving applications. To increase perception accuracy across different environmental conditions, vehicles can rely on both camera and LiDAR sensors, which provide complementary information about the same features. A sensor fusion method can therefore improve detection accuracy by combining the information from both sensors. Recently, many sensor fusion methods have been proposed that rely on deep neural networks, which typically require substantial computational resources to run in real time. We therefore propose a resource-efficient sensor fusion approach with a new neural network optimization method called knowledge-based pruning. The general principle is to prune the neural network guided by the location of the knowledge within the network, which is unveiled with explainable AI methods. More specifically, in this work we propose a pruning method that uses layer-wise relevance propagation (LRP) to localize the network knowledge. The considered sensor fusion method uses off-the-shelf pretrained networks, which we optimize for our application using the LRP pruning method. This can be seen as a form of transfer learning, as a pretrained model is optimized to be applied to a subset of the tasks it was originally trained for.
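To illustrate the general principle of relevance-guided pruning described above, the following is a minimal, hypothetical sketch (not the paper's actual implementation): it applies epsilon-LRP to a toy two-layer network, accumulates per-unit relevance over a calibration batch, and removes the hidden units that carry the least relevance. All shapes, the epsilon stabilizer, and the 50% keep ratio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network (hypothetical stand-in for a pretrained model).
W1 = rng.normal(size=(16, 8))   # hidden x input
W2 = rng.normal(size=(4, 16))   # output x hidden
EPS = 1e-6                      # epsilon stabilizer for the LRP rule

def lrp_hidden_relevance(x):
    """Epsilon-LRP: propagate output relevance back to the hidden layer."""
    a = np.maximum(W1 @ x, 0.0)   # ReLU hidden activations
    z = W2 @ a                    # output pre-activations
    r_out = z                     # initialize relevance at the output
    s = r_out / (z + EPS)         # stabilized relevance / activation ratio
    return a * (W2.T @ s)         # relevance attributed to each hidden unit

# Accumulate absolute relevance over a (random) calibration batch.
scores = np.zeros(16)
for _ in range(32):
    scores += np.abs(lrp_hidden_relevance(rng.normal(size=8)))

# Prune the hidden units carrying the least relevance (keep top 50%).
keep = np.argsort(scores)[-8:]
W1_pruned, W2_pruned = W1[keep], W2[:, keep]
print(W1_pruned.shape, W2_pruned.shape)  # (8, 8) (4, 8)
```

The key idea is that pruning decisions are driven by where the relevance (i.e., the network's knowledge for the target task) concentrates, rather than by weight magnitude alone.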