Abstract

Deep learning has recently been applied to a wide variety of fields. In particular, numerous studies have examined self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end control learns the output vectors of output devices directly from the input vectors of available input devices; that is, it does not analyze the meaning of the input vectors but instead learns to extract optimal output vectors from them. When end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are typically controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be used directly as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size must be reduced efficiently. It is therefore necessary to extract features from the captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road regions from the captured images, identifying the edges of the extracted road regions, and merging the image parts that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving The Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the steering-wheel angle obtained with the generated images against those obtained with resized images containing the entire captured area and with cropped images containing only part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% relative to the resized images and cropped images, respectively.
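The evaluation metric described above, the accumulated difference in steering-wheel angle, can be sketched as follows. This is a minimal illustrative reading of the metric; the function name, array shapes, and sample values are assumptions, not taken from the paper:

```python
import numpy as np

def accumulated_angle_difference(predicted, ground_truth):
    """Sum of absolute differences between predicted and reference
    steering-wheel angles over a driving sequence (a hypothetical
    rendering of the accumulated-difference metric)."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.sum(np.abs(predicted - ground_truth)))

# Illustrative angles (in arbitrary units): a lower accumulated
# difference indicates steering closer to the reference trajectory.
pred = [0.10, -0.05, 0.20]
ref = [0.12, -0.02, 0.18]
diff = accumulated_angle_difference(pred, ref)  # 0.02 + 0.03 + 0.02
```

Under this reading, comparing the metric across the three input-generation strategies (proposed merging, resizing, and cropping) ranks them by how closely the learned controller tracks the reference steering.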

Highlights

  • Research on utilizing sensors, such as radar and light detection and ranging (LiDAR), attached to vehicles has been actively under way to accurately recognize surrounding environments for self-driving vehicles [1]

  • This paper proposes a method to crop input images for self-driving cars using end-to-end control

  • Areas in the images representing lanes were extracted from the images collected during self-driving for use as input to the end-to-end control


Summary

Introduction

Research on utilizing sensors, such as radar and light detection and ranging (LiDAR), attached to vehicles has been actively underway to accurately recognize the surrounding environment of self-driving vehicles [1]. Studies on end-to-end control-based self-driving have also been actively conducted, controlling vehicles using images captured by one or more cameras attached to them as input [12,13,14]. This paper proposes a method to generate input images for CNNs: multiple image parts featuring both sides of a lane are extracted and merged into input images, taking the perspective of the car into account. This reduces the learning time needed for self-driving cars through end-to-end control. Based on the cropped images and the proposed CNN model, the process by which the system learns to control the car is described.
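A minimal sketch of the crop-and-merge idea follows, using a crude gradient-based edge detector as a stand-in for whatever lane-edge extraction the paper actually uses; the function name, image sizes, and selection rule are all illustrative assumptions:

```python
import numpy as np

def crop_edge_rows(image, keep_rows=16):
    """Keep only the rows with the strongest horizontal gradients
    (a crude stand-in for lane-edge detection) and stack them into
    a smaller image suitable as CNN input."""
    # Horizontal intensity differences approximate vertical lane edges.
    grad = np.abs(np.diff(image.astype(float), axis=1))
    row_strength = grad.sum(axis=1)
    # Select the rows most likely to contain lane edges, in original order,
    # then merge them by stacking into a reduced-height image.
    top = np.sort(np.argsort(row_strength)[-keep_rows:])
    return image[top, :]

# A synthetic 64x64 "road" image with two bright lane markings
# appearing only in the lower half of the frame.
img = np.zeros((64, 64))
img[32:, 20] = 1.0  # left lane marking
img[32:, 44] = 1.0  # right lane marking
small = crop_edge_rows(img, keep_rows=16)  # shape (16, 64)
```

The point of the sketch is the memory argument from the abstract: rather than resizing the whole frame (blurring the lane markings) or cropping a fixed region (risking losing them), only the parts of the image that carry edge information are merged into the network input.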

Research on Awareness of Cars
Research on Driving Method of Cars
End-to-End Controls of Cars
Image Cropping Approach for Self-Driving Cars
System Overview
Learning Routes
Conclusions

