Abstract

This paper presents a robot designed to autonomously navigate a track by detecting lanes and centering itself between them using a camera. We propose a simple algorithm for detecting and tracking lanes that does not rely on any camera parameters. The car detects lanes from the image stream of a camera mounted at its front. All computation on the camera data is handled by a laptop carried on the car, while car control is handled by an ATmega8 microcontroller. Object-tracking algorithms based on color and shape are used to track the lanes. Based on the information retrieved, data/signals are transferred to the controller, which drives the motors. The project uses the OpenCV library for digital image processing under a Linux environment.

Keywords - Lane detection, Hough transform, USART, PWM.

I. INTRODUCTION

Car accidents kill thousands of people every year, and most of these accidents are caused by driver faults (1). Automating driving may help reduce this large number of human injuries and deaths. One useful technology is lane detection. Different methods are used for lane detection and tracking, such as B-snake (3) and IPM (Inverse Perspective Mapping) (4); both have their advantages and disadvantages. This paper presents the design of a small prototype AGV (Autonomous Guided Vehicle) which detects and follows lanes using simple machine-vision algorithms; a minimal sketch of such a pipeline is given at the end of this section. AGVs are capable of performing required tasks in defined environments without continuous human guidance. Different robots are autonomous in different ways; the basic idea is to program the robot to respond in a certain way. An autonomous robot that follows a defined path is known as an AGV. Generally, an AGV requires a definite path, such as painted or wired tracks, and a controller.

The paper is organized as follows: Section II gives the basic idea of lane detection, Section III gives a detailed description of the approach, results are presented in Section IV, and a conclusion is given in Section V.
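To make the camera-to-controller pipeline described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: lane segments are found with Canny edge detection and the probabilistic Hough transform in OpenCV, the offset of the lane centre from the image centre is estimated, and a one-byte steering command is sent to the microcontroller over the serial (USART) link. The camera index 0, the port /dev/ttyUSB0, the 'L'/'R'/'F' command protocol, and all thresholds are hypothetical illustration choices, not details taken from the paper.

    # Minimal lane-following sketch (assumed Python, OpenCV and pyserial).
    import cv2
    import numpy as np
    import serial

    cam = cv2.VideoCapture(0)                      # forward-facing camera (assumed index)
    link = serial.Serial('/dev/ttyUSB0', 9600)     # USART link to ATmega8 (assumed port/baud)

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blur, 50, 150)
        # Probabilistic Hough transform returns candidate lane segments
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=20)
        if lines is None:
            continue
        # Estimate the lane centre as the mean x-coordinate of all segment endpoints
        xs = [x for line in lines for x in (line[0][0], line[0][2])]
        offset = int(np.mean(xs)) - frame.shape[1] // 2
        # One-byte steering command to the controller (hypothetical protocol):
        # 'L' = steer left, 'R' = steer right, 'F' = go straight
        if offset < -20:
            link.write(b'L')
        elif offset > 20:
            link.write(b'R')
        else:
            link.write(b'F')

On the microcontroller side, such a command byte would typically be read over USART and mapped to PWM duty cycles for the drive motors, which matches the USART/PWM keywords above; the exact mapping is described in the full text rather than here.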
