Moving object detection (MOD) technology combines detection, tracking, and classification to provide information such as local and global position estimates and the velocities of surrounding objects in real time at a minimum of 15 fps. To operate an autonomous vehicle on real roads, a multi-sensor-based object detection and classification module must process data from multiple sensors simultaneously within the autonomous system to ensure safe driving. In addition, object detection must achieve high-speed processing performance on the limited hardware platform of an autonomous vehicle. To address this problem, we modified a detector based on Redmon's Darknet YOLO deep learning framework ( https://pjreddie.com/darknet/yolo ) to obtain local position estimates in real time. The aim of this study was to obtain the local position of a moving object by fusing information from multiple cameras and one RADAR. To this end, we built a fusion server that synchronizes and converts multi-object information from the multiple sensors on our autonomous vehicle. In this paper, we introduce a local position estimation method that covers the surrounding view, including the long-, middle-, and short-range views. We also describe a method to handle the problems caused by steep slopes and curved roads while driving. Finally, we present the results of our proposed MOD-based detection and tracking estimation, which was used to obtain a license for autonomous driving in Korea.
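As an illustrative sketch only (the abstract does not describe the fusion server's implementation), the snippet below shows one way camera detections from a YOLO-style detector might be associated with RADAR range/azimuth measurements to estimate an object's local position in the vehicle frame. All class names, fields, thresholds, and the nearest-azimuth association step are assumptions, not the authors' method.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch, not the authors' fusion server.
# Names, fields, and the simple nearest-azimuth association are assumptions.

@dataclass
class CameraDetection:
    label: str          # object class from the YOLO-style detector
    bearing_deg: float  # horizontal angle of the bounding-box center (from camera calibration)
    timestamp: float    # capture time in seconds

@dataclass
class RadarTrack:
    range_m: float      # radial distance to the object
    azimuth_deg: float  # angle of the RADAR return
    timestamp: float

def fuse(camera_dets, radar_tracks, max_angle_diff_deg=3.0, max_time_diff_s=0.05):
    """Associate each camera detection with the closest RADAR track in angle
    and time, then convert the matched range/azimuth into a local (x, y)
    position in the vehicle frame (x forward, y lateral)."""
    fused = []
    for det in camera_dets:
        best = None
        for trk in radar_tracks:
            if abs(trk.timestamp - det.timestamp) > max_time_diff_s:
                continue
            diff = abs(trk.azimuth_deg - det.bearing_deg)
            if diff <= max_angle_diff_deg and (best is None or diff < best[0]):
                best = (diff, trk)
        if best is not None:
            trk = best[1]
            az = math.radians(trk.azimuth_deg)
            x = trk.range_m * math.cos(az)  # forward distance
            y = trk.range_m * math.sin(az)  # lateral offset
            fused.append((det.label, x, y))
    return fused

if __name__ == "__main__":
    cams = [CameraDetection("car", bearing_deg=5.2, timestamp=10.00)]
    radar = [RadarTrack(range_m=32.0, azimuth_deg=5.0, timestamp=10.01)]
    print(fuse(cams, radar))  # e.g. [('car', 31.87..., 2.78...)]
```

In practice, the fusion server described in the abstract would also need to handle time synchronization across sensors and coordinate conversion for the long-, middle-, and short-range views; the thresholds above are placeholders.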