In this paper, we present a novel autonomous vehicle (AV) localization design and its implementation, which we recommend for challenging navigation conditions where both satellite navigation signals and computer vision images are of poor quality. When the GPS signal becomes unstable, auxiliary navigation systems, such as computer-vision-based positioning, are employed for more accurate localization and mapping. However, the data obtained from the AV's sensors can also be degraded by extreme environmental conditions, which inevitably reduces navigation performance. To verify our computer-vision-based localization system design, we considered an Arctic use case, which poses additional challenges for AV navigation and in which artificial visual landmarks can be deployed to improve localization quality; we used images of such landmarks to train the computer vision model. We further augmented the training data with affine transformations to increase its diversity. We selected the YOLOv4 object detection architecture for our system design, as it demonstrated the highest performance in our experiments. For the computational platform, we employed an Nvidia Jetson AGX Xavier device, as it is well known and widely used in robotic and AV computer vision, as well as deep learning applications. Our empirical study showed that the proposed computer vision system, trained on the dataset augmented with affine transformations, is robust to image quality degradation caused by extreme environmental conditions: it effectively detected and recognized artificial visual landmarks captured under extreme Arctic conditions. The developed system can be integrated into vehicle navigation facilities to improve their effectiveness and efficiency and to prevent possible navigation performance deterioration.
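The affine-transformation augmentation mentioned above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it assumes a grayscale image stored as a NumPy array and applies a combined rotation/scale/translation warp via inverse mapping with nearest-neighbor sampling; the function name and parameters are hypothetical.

```python
import numpy as np

def affine_augment(img, angle_deg=0.0, scale=1.0, tx=0.0, ty=0.0):
    """Warp a 2-D image with a rotation/scale/translation affine transform.

    Uses inverse mapping: for each output pixel we compute the source
    coordinate, then sample with nearest-neighbor interpolation.
    Pixels mapping outside the source image are filled with zeros.
    """
    h, w = img.shape[:2]
    a = np.deg2rad(angle_deg)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0  # rotate/scale about the center
    cos_a, sin_a = np.cos(a) / scale, np.sin(a) / scale

    ys, xs = np.mgrid[0:h, 0:w]
    # Undo the translation and re-center, then apply the inverse rotation/scale.
    x0 = xs - cx - tx
    y0 = ys - cy - ty
    src_x = np.round(cos_a * x0 + sin_a * y0 + cx).astype(int)
    src_y = np.round(-sin_a * x0 + cos_a * y0 + cy).astype(int)

    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = np.zeros_like(img)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out

# Example: generate several augmented variants of one landmark image.
landmark = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
variants = [affine_augment(landmark, angle_deg=r, scale=s, tx=t)
            for r in (0, 15, -15) for s in (0.9, 1.1) for t in (-4, 4)]
```

In practice a framework routine (e.g. OpenCV's `cv2.warpAffine`) would be used for speed and better interpolation; the point here is only the shape of the augmentation step that expands the landmark dataset before detector training.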