Abstract

In this paper, we develop end-to-end autonomous driving that predicts the control values of a vehicle directly from 2D LiDAR and camera sensor data, instead of modeling rule-based autonomous driving. Unlike many studies that rely on simulated data, we built our end-to-end autonomous driving algorithm from data obtained during real driving and analyzed its performance. Using data collected in an actual urban driving environment, end-to-end autonomous driving was possible even in unstructured situations such as traffic signals, with a convolutional neural network predicting the vehicle control values. In addition, this paper addresses the data imbalance problem by eliminating redundant frames recorded while the vehicle was stopped or driving, which improves self-driving performance. Finally, we verified through activation maps how the network predicts the longitudinal and lateral control values by recognizing the traffic facilities in the driving environment. Experiments and analysis demonstrate the validity of the proposed algorithm.
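The redundant-frame elimination described above can be sketched as a simple subsampling pass over the recorded log: consecutive frames captured while the vehicle is stopped are nearly identical, so keeping only a fraction of them rebalances the training set. This is a minimal illustration, not the authors' exact procedure; the `speed` field, thresholds, and keep ratio are assumptions.

```python
def filter_redundant_frames(frames, speed_threshold=0.5, keep_every=10):
    """Keep every frame recorded while driving; subsample frames while stopped.

    frames: list of dicts with a 'speed' field in m/s (assumed log format).
    speed_threshold: below this speed the vehicle is treated as stopped.
    keep_every: keep 1 out of every `keep_every` stopped frames.
    """
    kept = []
    stopped_run = 0  # length of the current run of stopped frames
    for frame in frames:
        if frame["speed"] < speed_threshold:
            # Vehicle effectively stopped: keep only 1 in `keep_every` frames
            if stopped_run % keep_every == 0:
                kept.append(frame)
            stopped_run += 1
        else:
            # Vehicle moving: always keep the frame and reset the run counter
            stopped_run = 0
            kept.append(frame)
    return kept
```

For example, a log with 30 consecutive stopped frames would contribute only 3 of them to training, while every moving frame is retained.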

Highlights

  • A self-driving car is a system that recognizes the driving environment, generates the path, and drives the vehicle itself by utilizing environmental-awareness sensors such as cameras, radar, LiDAR, and GPS

  • Instead of the 3D LiDAR used in Chen et al. [13], we proposed an E2E self-driving algorithm based on a convolutional neural network (CNN) that predicts the longitudinal and lateral control values of the vehicle by training on point-cloud data acquired from a 2D LiDAR sensor together with camera images

  • The proposed E2E self-driving network has an input structure with two separate branches: a 299 × 299 image acquired from a camera sensor and a 224 × 224 image encoded from the 2D LiDAR data by a preprocessing algorithm
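The two-branch input structure in the highlights above can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' exact network: the framework choice, layer sizes, and fusion head are assumptions; only the input shapes (a 3-channel 299 × 299 camera image and a 1-channel 224 × 224 encoded LiDAR image) and the two control outputs come from the text.

```python
import torch
import torch.nn as nn

class TwoBranchE2ENet(nn.Module):
    """Illustrative two-branch E2E network: camera + encoded 2D-LiDAR image
    in, [longitudinal, lateral] control values out."""

    def __init__(self):
        super().__init__()
        # Camera branch: 3 x 299 x 299 RGB image -> 32-dim feature vector
        self.cam = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LiDAR branch: 1 x 224 x 224 encoded point-cloud image -> 32-dim vector
        self.lidar = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: concatenated features -> 2 control values
        self.head = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2),
        )

    def forward(self, camera_img, lidar_img):
        # Process each modality in its own branch, then fuse by concatenation
        feats = torch.cat([self.cam(camera_img), self.lidar(lidar_img)], dim=1)
        return self.head(feats)
```

A forward pass with one camera frame and one encoded LiDAR frame, e.g. `TwoBranchE2ENet()(torch.zeros(1, 3, 299, 299), torch.zeros(1, 1, 224, 224))`, yields a tensor of shape `(1, 2)`: one longitudinal and one lateral control value.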


Summary

Introduction

A self-driving car is a system that recognizes the driving environment, generates the path, and drives the vehicle itself by utilizing environmental-awareness sensors such as cameras, radar, LiDAR, and GPS. Longitudinal and lateral control are performed to reliably track the target control values of the vehicle determined by recognition and decision [3]. Autonomous driving takes place both on highways and in complex urban areas with many variables, such as traffic lights, surrounding vehicles, motorcycles, pedestrians, road structures, and unpredictable conditions, and recognizing the various objects in these complex road environments remains a difficult problem [6–9]. Unlike previous studies, which consist of perception, decision, and control modules, we proposed a convolutional neural network (CNN) that takes real-world LiDAR and camera data as input and outputs the target longitudinal and lateral speeds of the vehicle. Different from much previous end-to-end self-driving work, we proposed end-to-end self-driving that complies with road traffic laws by acquiring real-world data in complex urban environments that include traffic lights.

Materials and Methods
End-to-End Self-Driving
Explainable End-to-End Self-Driving System
Experimental Setup
Convolutional Neural Network for End-to-End Self-Driving
Proposed Network Architecture
Discussion
Findings
Conclusions