Abstract

Autonomous driving systems are becoming increasingly widespread. A promising research direction is the design of control systems for self-driving cars that rely on multiple sensors. Data fusion makes it possible to build a more complete and accurate model of the surrounding scene by complementing the data of one modality with the data of another. This article describes an approach to driving an unmanned vehicle in the CARLA simulation environment based on a neural network model that receives multimodal data from a camera and a lidar. The approach can significantly improve the quality of scene recognition by extracting a hierarchy of features in the inner layers of the network and integrating the multimodal information with a transformer model using attention. The output of the neural network is a sequence of waypoints defining the vehicle's further motion, which are converted into control actions for the steering wheel, throttle, and brake.
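The abstract does not include code, but the pipeline it describes (per-modality encoding, transformer-based fusion with attention, waypoint prediction, and conversion of waypoints into control actions) can be sketched compactly. The following is a minimal, hedged sketch assuming a PyTorch implementation; all names, hyperparameters, and the simple geometric waypoint-following controller (MultimodalFusionDriver, waypoints_to_controls, d_model, target_speed) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): camera/lidar fusion via transformer
# self-attention, waypoint prediction, and waypoint-to-control conversion.
import math
import torch
import torch.nn as nn

class MultimodalFusionDriver(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4, n_waypoints=4):
        super().__init__()
        # Separate convolutional encoders turn each modality into a token grid.
        self.camera_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=8, stride=8), nn.ReLU())
        self.lidar_encoder = nn.Sequential(  # lidar as a 2-channel BEV raster
            nn.Conv2d(2, d_model, kernel_size=8, stride=8), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Self-attention over the joint token sequence lets features of one
        # modality attend to, and complement, features of the other.
        self.fusion = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_waypoints * 2)  # (x, y) per waypoint
        self.n_waypoints = n_waypoints

    def forward(self, camera, lidar_bev):
        cam = self.camera_encoder(camera).flatten(2).transpose(1, 2)
        lid = self.lidar_encoder(lidar_bev).flatten(2).transpose(1, 2)
        tokens = torch.cat([cam, lid], dim=1)    # concatenated token sequence
        fused = self.fusion(tokens).mean(dim=1)  # pooled scene feature
        return self.head(fused).view(-1, self.n_waypoints, 2)

def waypoints_to_controls(waypoints, speed, target_speed=6.0):
    """Convert predicted ego-frame waypoints into (steer, throttle, brake).

    A simple geometric rule: steer toward the nearest waypoint and apply a
    proportional rule on the speed error for throttle/brake.
    """
    wp = waypoints[0, 0]  # nearest predicted waypoint (x forward, y lateral)
    steer = math.atan2(wp[1].item(), wp[0].item()) / (math.pi / 2)
    steer = max(-1.0, min(1.0, steer))
    err = target_speed - speed
    throttle = max(0.0, min(0.75, 0.3 * err))
    brake = 1.0 if err < -1.0 else 0.0
    return steer, throttle, brake

# Example usage with random inputs of assumed shapes:
model = MultimodalFusionDriver()
wps = model(torch.rand(1, 3, 128, 128), torch.rand(1, 2, 128, 128))
steer, throttle, brake = waypoints_to_controls(wps, speed=4.0)
```

In the CARLA Python API, the three resulting values map directly onto carla.VehicleControl(steer=..., throttle=..., brake=...), which is applied to the ego vehicle each simulation tick.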
