Abstract
Every day, humans make hundreds of decisions, most of them based on information gathered from our surroundings. When driving, the majority of this perception is visual. Self-driving cars are autonomous vehicles capable of recognizing the objects around them and making quick decisions in response to these stimuli. Self-driving cars are a major focus and an emerging subject for every automobile giant in the world. Deaths caused by human failure in road accidents rise with every passing month, and with technology assisting humans in every possible field, it is essential to prioritize road safety as well. A thorough investigation of traffic safety found that human error was the sole cause of 57% of accidents and a contributing factor in more than 90% of them; by comparison, only 2.4% of crashes resulted from a mechanical issue and 4.7% from environmental factors alone. Autonomous cars not only help reduce human errors while driving but also reduce driver fatigue to a great extent. This study examines the work of a number of scholars in order to provide a concise summary of how computer vision is being used in autonomous cars today. Previous research has demonstrated deep learning approaches built on LiDAR sensors, but these rely on costly hardware. This paper focuses on the longitudinal and lateral movement of the car using a more budget-friendly yet effective approach: simple computer vision algorithms such as color space transformation, Canny edge detection, and the Hough line transform, which let the car detect lane lines and steer so that it remains bound within them. Simple computer vision requires less operational hardware, which means that cost-effective boards such as the Raspberry Pi and Nvidia Jetson Nano can yield powerful results. The paper also highlights the conversion of a ready-made remote-controlled car, which helps in understanding autonomous cars even better.
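For context, the pipeline the abstract names (color space transformation, Canny edge detection, Hough line transform) corresponds to a standard OpenCV lane-detection flow. Below is a minimal sketch assuming OpenCV and NumPy; the thresholds, the region-of-interest polygon, and the input file "road.jpg" are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    # Color space transformation: convert the BGR frame to grayscale.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Smooth the image to suppress noise before edge detection.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Canny edge detection (thresholds are illustrative).
    edges = cv2.Canny(blurred, 50, 150)
    # Mask everything outside a triangular region of interest
    # covering the road ahead of the car.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (w // 2, h // 2)]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)
    # Probabilistic Hough line transform extracts lane-line segments.
    return cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180,
                           threshold=50, minLineLength=40, maxLineGap=100)

# Example usage: draw the detected segments on a frame.
frame = cv2.imread("road.jpg")  # hypothetical test image
lines = detect_lane_lines(frame)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 3)
cv2.imwrite("lanes.jpg", frame)
```

In a lane-keeping loop, the slopes and intercepts of the left and right segments would then be averaged to estimate the lane center, and the offset from the image midline used as the steering signal.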