Abstract

In this study, a millimeter-wave (MMW) radar and an onboard camera are used to develop a sensor fusion algorithm for a forward collision warning system. The study proposes integrating the MMW radar and the camera to compensate for the deficiencies of relying on a single sensor and to improve frontal object detection rates. Density-based spatial clustering of applications with noise (DBSCAN) and particle filter algorithms are used in the radar-based object detection system to remove non-object noise and track the target object, while a two-stage vision recognition system detects and recognizes the objects in front of the vehicle. The detected objects include pedestrians, motorcycles, and cars. For spatial alignment, a radial basis function neural network learns the conversion relationship between the distance information of the MMW radar and the coordinate information in the image, and a neural network is then utilized for object matching. The sensor with the higher confidence index is selected as the system output. Finally, three scenario conditions (daytime, nighttime, and rainy day) were designed to test the performance of the proposed method. The detection rate and false alarm rate of the proposed system were approximately 90.5% and 0.6%, respectively.
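The abstract's radar pre-processing step, DBSCAN, separates dense clusters of radar returns (real targets) from isolated non-object noise. The following is a minimal, self-contained sketch of that idea; the `eps` and `min_pts` values and the sample points are illustrative and not taken from the paper.

```python
# Hedged sketch: DBSCAN-style noise removal for 2-D radar detections.
# eps / min_pts values are illustrative, not from the paper.
import math

def dbscan(points, eps=1.0, min_pts=3):
    """Label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbours(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # tentatively noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= min_pts:   # core point: keep expanding
                queue.extend(jn)

    return labels

# A tight cluster (a real target) plus two isolated noise returns:
pts = [(0, 0), (0.3, 0.2), (0.2, 0.4), (0.4, 0.1), (8, 8), (-7, 5)]
labels = dbscan(pts, eps=1.0, min_pts=3)
targets = [p for p, lab in zip(pts, labels) if lab != -1]
```

Only the four clustered returns survive as target candidates; the two isolated returns are labeled noise and discarded before tracking.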

Highlights

  • In recent years, the development of advanced driving assist systems (ADAS) has attracted substantial research effort and funding from major car manufacturers and universities

  • This paper extends our earlier vision-based research work [16] and proposes a set of MMW radar and camera fusion strategies based on a parallel architecture that can compensate for the failure of a single sensor and enhance the system detection rate by exploiting the complementarity of the sensors

  • An MMW radar and a camera were integrated in this study to develop a frontal object detection system based on sensor fusion using parallel architecture


Summary

Introduction

The development of advanced driving assist systems (ADAS) has attracted substantial research effort and funding from major car manufacturers and universities. Wang et al. [15] proposed a tandem sensor fusion with a series connection architecture that uses MMW radar to obtain the candidate position of the detected object. The main purpose of using a series architecture in sensor fusion is to rapidly determine the candidate area via radar or Lidar and thus accelerate the image search process. Another advantage of using a second-layer sensor is reduced noise interference after verification and comparison. This paper extends our earlier vision-based research work [16] and proposes a set of MMW radar and camera fusion strategies based on a parallel architecture that can compensate for the failure of a single sensor and enhance the system detection rate by exploiting the complementarity of the sensors.
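The parallel-architecture decision described above, in which the sensor with the higher confidence index is selected as the system output, can be sketched as follows. The field names, confidence values, and fallback behavior are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a parallel fusion decision: each branch (radar,
# camera) reports a detection with a confidence index, and the more
# confident branch wins. Field names and values are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    sensor: str        # "radar" or "camera"
    distance_m: float  # longitudinal distance to the object
    confidence: float  # confidence index in [0, 1]

def fuse(radar_det: Optional[Detection],
         camera_det: Optional[Detection]) -> Optional[Detection]:
    """Return the detection from the more confident sensor.

    If one branch failed (None), fall back to the other, so a single
    sensor dropout does not silence the warning system.
    """
    candidates = [d for d in (radar_det, camera_det) if d is not None]
    if not candidates:
        return None
    return max(candidates, key=lambda d: d.confidence)

# The camera branch is more confident in daytime...
out = fuse(Detection("radar", 22.5, 0.70), Detection("camera", 21.8, 0.92))
# ...and the radar branch alone still yields an output at night.
night = fuse(Detection("radar", 30.0, 0.80), None)
```

This fallback is the key difference from a series architecture: in the parallel scheme, either branch alone can still produce a warning when the other fails.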

System
Radar-Based Object Detection
  Radar Data Pre-Processing
  Particle Filter
    State Prediction
    Importance Sampling
    Resampling
Vision-Based Object Detection
Sensor Fusion
  Coordinate Alignment
  Object Matching
  Decision Strategy
Experimental Results
  Radar-Based Detection Subsystem
  Vision-Based Detection Subsystem
  Sensor Fusion System
Findings
Conclusions
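The particle-filter steps named in the summary outline (state prediction, importance sampling, resampling) can be sketched for a single radar range track. The motion model, noise levels, and particle count below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the particle-filter steps from the outline, tracking
# one range measurement. Noise levels and motion model are illustrative.
import math
import random

random.seed(0)

N = 500
particles = [random.uniform(0.0, 50.0) for _ in range(N)]  # range in metres

def predict(particles, velocity, dt=0.1, sigma=0.3):
    """State prediction: propagate each particle with a noisy motion model."""
    return [p + velocity * dt + random.gauss(0.0, sigma) for p in particles]

def weight(particles, z, sigma=1.0):
    """Importance sampling: weight particles by the measurement likelihood."""
    w = [math.exp(-0.5 * ((p - z) / sigma) ** 2) for p in particles]
    total = sum(w)
    return [x / total for x in w]

def resample(particles, weights):
    """Resampling: redraw particles in proportion to their weights."""
    return random.choices(particles, weights=weights, k=len(particles))

z = 20.0  # radar range measurement after clustering
particles = predict(particles, velocity=-2.0)          # target approaching
particles = resample(particles, weight(particles, z))
estimate = sum(particles) / len(particles)             # tracked range
```

After one predict/weight/resample cycle, the particle mean concentrates near the measured range, which is the tracked target position passed on to the fusion stage.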