Abstract
A controller area network (CAN) bus carries the control signals of both manually and autonomously driven vehicles. Temporal analysis of CAN data provides insight into how control decisions are translated into motion. A vehicle's motion is planned from the perception of its surroundings, using optical sensors that supply spatial data. Deep learning techniques that learn to translate this spatial information into temporal predictions typically rely on the temporal data alone for loss computation, which is insufficient for a faithful translation. This work presents a novel loss function that adds a spatial-feature decoding term to the temporal prediction loss on CAN bus data. A deep network is proposed that uses a transformer encoder to encode images and CAN data, together with a convolutional decoder that generates the spatial features. Experiments on the nuScenes data set show promising results for the proposed idea in CAN bus data prediction. Code is available online so the proposed loss function can be applied in further settings: https://github.com/Aasimrafique/AuxililaryVisualLoss.
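The linked repository contains the authors' implementation; the sketch below only illustrates the core idea of augmenting a temporal prediction loss with an auxiliary spatial decoding term. The class name `AuxiliaryVisualLoss`, the use of MSE for both terms, and the weighting factor `spatial_weight` are assumptions for illustration, not the paper's exact formulation.

```python
import torch.nn as nn


class AuxiliaryVisualLoss(nn.Module):
    """Illustrative combined loss: temporal CAN-signal prediction error
    plus an auxiliary term on the convolutional decoder's reconstruction
    of spatial (image) features. Names and weighting are assumptions."""

    def __init__(self, spatial_weight: float = 1.0):
        super().__init__()
        self.temporal_loss = nn.MSELoss()  # error on predicted CAN values
        self.spatial_loss = nn.MSELoss()   # error on decoded spatial features
        self.spatial_weight = spatial_weight

    def forward(self, can_pred, can_target, feat_pred, feat_target):
        # Temporal term: how well the network predicts future CAN data.
        l_temporal = self.temporal_loss(can_pred, can_target)
        # Auxiliary spatial term: how well the decoder recovers
        # the spatial features extracted from the camera input.
        l_spatial = self.spatial_loss(feat_pred, feat_target)
        return l_temporal + self.spatial_weight * l_spatial
```

During training, such a combined objective lets gradients from the spatial reconstruction regularize the encoder, so the temporal prediction is not learned from the CAN signal alone.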