Object detection for automotive radar point clouds – a comparison

Abstract

Automotive radar perception is an integral part of automated driving systems. Radar sensors benefit from their excellent robustness against adverse weather conditions such as snow, fog, or heavy rain. Although machine-learning-based object detection is traditionally a camera-based domain, vast progress has been made for lidar sensors, and radar is catching up as well. Recently, several new techniques for applying machine learning algorithms to the detection and classification of moving road users in automotive radar data have been introduced. However, most of them have not been compared to other methods, or they require next-generation radar sensors far more advanced than current conventional automotive sensors. This article makes a thorough comparison of existing and novel radar object detection algorithms with some of the most successful candidates from the image and lidar domain. All experiments are conducted using a conventional automotive radar system. In addition to introducing all architectures, special attention is paid to the necessary point cloud preprocessing for all methods. By assessing all methods on a large and open real-world data set, this evaluation provides the first representative algorithm comparison in this domain and outlines future research directions.
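
Because the comparison hinges on point cloud preprocessing, a small illustration may help. The sketch below shows one common preprocessing step for conventional automotive radar: ego-motion-compensated Doppler filtering to isolate moving road users, followed by accumulation of several sparse scans. The function names, column layout, and threshold are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def filter_moving_targets(points, ego_speed, doppler_threshold=0.5):
    """Keep radar detections that are likely moving road users.

    points: (N, 4) array with assumed columns [x, y, doppler, rcs] in
    sensor coordinates; ego_speed: ego vehicle speed in m/s.
    """
    x, y, doppler = points[:, 0], points[:, 1], points[:, 2]
    azimuth = np.arctan2(y, x)
    # Doppler a static object would produce given the ego motion.
    expected_static = -ego_speed * np.cos(azimuth)
    residual = np.abs(doppler - expected_static)
    return points[residual > doppler_threshold]

def accumulate_scans(scans, ego_speeds):
    """Stack several consecutive filtered scans to densify the sparse
    radar point cloud before feeding it to a detector. A full pipeline
    would also transform older scans into the current ego frame."""
    return np.vstack([filter_moving_targets(s, v)
                      for s, v in zip(scans, ego_speeds)])
```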

Similar Papers
  • Research Article
  • 10.1108/sr-08-2024-0702
Cameraless sensor fusion: developing a cost-effective driver assistance system using radar and ultrasonic sensor
  • Dec 19, 2024
  • Sensor Review
  • Sasikumar S + 3 more

Purpose: This paper aims to develop a cost-effective, camera-less advanced driver assistance system (ADAS) for electric vehicles, using sensor fusion of ultrasonic and radar sensors to implement adaptive cruise control (ACC), blind spot detection (BSD) and reverse parking (RP).

Design/methodology/approach: The system was tested on an electric vehicle test bench, using strategically placed ultrasonic and radar sensors. Sensor fusion enabled accurate object detection and distance measurement. The system's performance was evaluated through simulated obstacle scenarios, with responses monitored via a graphical user interface. Sensor and GPS data were transmitted to the cloud for potential vehicle-to-vehicle communication.

Findings: The sensor fusion approach effectively supported the ACC, BSD and RP functions, demonstrating accuracy in obstacle detection, speed adjustment and emergency braking. Real-time system visualization confirmed reliability across various scenarios, and cloud integration showed promise for future communication enhancements.

Research limitations/implications: Ultrasonic and radar sensors have limited range and accuracy compared to cameras. Ultrasonic sensors are less effective at longer distances, whereas radar can face challenges in detecting small or stationary objects. The performance of both sensor types can be degraded by environmental factors such as rain, fog or snow.

Practical implications: Improved obstacle detection and collision avoidance contribute to overall vehicle safety. Drivers benefit from advanced features like ACC, BSD and RP without the high cost of traditional camera-based systems. The use of ultrasonic and radar sensors makes advanced driver assistance features more affordable, allowing broader adoption across vehicle segments, including budget-friendly and mid-range models. The system's responsiveness and obstacle detection capabilities can lead to more efficient driving, reducing the likelihood of accidents and improving traffic flow.

Social implications: Enhanced safety features such as ACC, BSD and RP contribute to reducing traffic accidents and injuries. By making advanced driver assistance features more affordable, the system improves vehicle safety for a broader range of drivers, including those in lower-income brackets. The introduction of such systems can raise public awareness of the benefits of ADAS technologies and their role in enhancing road safety.

Originality/value: This study introduces a novel ADAS that eliminates the need for cameras by leveraging the strengths of radar and ultrasonic sensors, offering a practical and innovative solution for enhancing vehicle safety at a reduced cost.
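
For illustration, a minimal sketch of how two complementary range sensors might be fused for functions such as ACC and RP: an inverse-variance weighted average, trusting the ultrasonic sensor only within its short usable range. The variances and the 4 m crossover are assumptions, not the authors' design.

```python
def fuse_distance(radar_m, ultrasonic_m, radar_var=0.25, ultra_var=0.04,
                  ultra_max_range=4.0):
    """Inverse-variance fusion of radar and ultrasonic range readings."""
    # Beyond its usable range (assumed 4 m), ignore the ultrasonic reading.
    if ultrasonic_m is None or ultrasonic_m > ultra_max_range:
        return radar_m
    # Weight each sensor by the inverse of its assumed noise variance.
    w_r, w_u = 1.0 / radar_var, 1.0 / ultra_var
    return (w_r * radar_m + w_u * ultrasonic_m) / (w_r + w_u)

print(fuse_distance(3.2, 3.0))    # short range: ultrasonic dominates
print(fuse_distance(25.0, None))  # long range: radar only
```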

  • Research Article
  • Cited by 1
  • 10.1016/j.imavis.2024.105035
Localization-aware logit mimicking for object detection in adverse weather conditions
  • Apr 23, 2024
  • Image and Vision Computing
  • Peiyun Luo + 4 more


  • Book Chapter
  • Cited by 7
  • 10.1093/oso/9780198538509.003.0013
A Comparative Study of Classification Algorithms: Statistical, Machine Learning and Neural Network
  • Aug 25, 1994
  • R D King + 3 more

The aim of the StatLog project is to compare the performance of statistical, machine learning, and neural network algorithms on large real-world problems. This paper describes the completed work on classification in the StatLog project. Classification is here defined as the problem of estimating, from a set of attributes describing a new example sampled from the same source, the probability that it has a pre-defined class, given a set of multivariate data with assigned classes. We gathered together a representative collection of algorithms from statistics (Naive Bayes, K-nearest Neighbour, Kernel density, Linear discriminant, Quadratic discriminant, Logistic regression, Projection pursuit, Bayesian networks), machine learning (CART, C4.5, NewID, AC2, CAL5, CN2, ITrule; only propositional symbolic algorithms were considered), and neural networks (Backpropagation, Radial basis functions, Kohonen). We then applied these algorithms to eight large real-world classification problems: four from image analysis, two from medicine, and one each from engineering and finance. Our results are still provisional, but we can draw a number of tentative conclusions about the applicability of particular algorithms to particular database types. For example, we found that K-nearest Neighbour can perform well on complex image analysis problems if the attributes are properly scaled, but it is very slow; machine learning algorithms are very fast and robust to non-Normal features of databases, but may be outperformed if particular distributional assumptions hold. We additionally found that many classification algorithms need to be extended to deal better with cost functions (problems where the classes have an ordered relationship are a special case of this).
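
The StatLog finding that K-nearest Neighbour performs well only when attributes are properly scaled is easy to reproduce. A minimal scikit-learn sketch on a bundled dataset (chosen here purely for illustration):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)

# Raw attributes: features with large numeric ranges dominate the
# Euclidean distance and degrade k-NN accuracy.
raw = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)

# Standardized attributes: every feature contributes comparably.
scaled = cross_val_score(
    make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    X, y, cv=5)

print(f"raw accuracy:    {raw.mean():.3f}")
print(f"scaled accuracy: {scaled.mean():.3f}")
```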

  • Book Chapter
  • Cited by 3
  • 10.1007/978-981-15-0802-8_147
Travel behavior change patterns under adverse weather conditions - A case study from Ho Chi Minh City (HCMC), Vietnam
  • Oct 11, 2019
  • Anh Tuan Vu + 1 more

In major cities, road flooding caused by heavy rain and/or high tidal rise happens frequently and causes road traffic congestion and accidents. To help design effective traffic management measures for HCMC, this paper presents a survey of people's travel behavior changes under adverse weather conditions. A revealed-adaptation interview survey was conducted with 400 road users in 2018. Typical patterns of travel behavior change and their influential factors were analyzed from the survey data using the Pearson Chi-square Independence Test. While trip cancellation, delayed departure, waiting to resume the trip, route change, and destination change are significant, mode change is very modest. A road flood causes these changes more strongly than heavy rain alone. Factors influencing such changes are trip characteristics, including trip purpose, trip length and frequency, as well as personal characteristics. The results can serve as input data for travel demand forecasting models under adverse weather conditions and thus help in formulating traffic management strategies to mitigate the negative traffic impacts of urban floods and heavy rain.
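
The analysis rests on the Pearson chi-square independence test; a minimal scipy sketch on an invented contingency table (the counts below are illustrative, not the survey's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical table: rows are trip purposes, columns are behavior
# changes under road flooding (cancel / delay / reroute).
table = np.array([
    [40, 25, 15],  # commute
    [10, 30, 20],  # shopping
    [ 5, 10, 45],  # leisure
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
# A small p-value indicates trip purpose and behavior change are not
# independent, the kind of association the study tests for.
```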

  • Research Article
  • Cited by 10
  • 10.3390/app14135841
Improving YOLO Detection Performance of Autonomous Vehicles in Adverse Weather Conditions Using Metaheuristic Algorithms
  • Jul 4, 2024
  • Applied Sciences
  • İbrahim Özcan + 2 more

Despite rapid advances in deep learning (DL) for object detection, existing techniques still face several challenges. In particular, object detection in adverse weather conditions (AWCs) requires complex and computationally costly models to achieve high accuracy, and the generalization capabilities of these methods struggle to deliver consistent performance across different conditions. This work focuses on improving object detection with You Only Look Once (YOLO) versions 5, 7, and 9 in AWCs for autonomous vehicles. Although the default hyperparameter values are successful for images without AWCs, the optimum values must be found for AWCs. Given the large number and wide range of hyperparameters, determining them by trial and error is particularly challenging. In this study, the Gray Wolf Optimizer (GWO), Artificial Rabbit Optimizer (ARO), and Chimpanzee Leader Selection Optimization (CLEO) are independently applied to optimize the hyperparameters of YOLOv5, YOLOv7, and YOLOv9. The results show that this approach significantly improves the algorithms' object detection performance: overall performance on the AWC object detection task increased by 6.146%, by 6.277% for YOLOv7 + CLEO, and by 6.764% for YOLOv9 + GWO.
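
Of the three metaheuristics, the Gray Wolf Optimizer is compact enough to sketch. Below is a minimal, generic GWO over a box-bounded hyperparameter vector; the objective is a placeholder, whereas in the paper's setting it would train a YOLO model with the candidate hyperparameters and return, say, negative mAP.

```python
import numpy as np

def gwo(objective, bounds, n_wolves=10, n_iters=50, seed=0):
    """Minimal Gray Wolf Optimizer (minimization) over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        # The three fittest wolves (alpha, beta, delta) lead the pack.
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iters  # exploration coefficient decays to 0
        for i in range(n_wolves):
            pos = np.zeros(len(lo))
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(pos / 3, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)]

# Placeholder objective standing in for "train YOLO, return -mAP".
best = gwo(lambda w: np.sum((w - 0.3) ** 2),
           bounds=np.array([[0.0, 1.0]] * 3))
print(best)  # approaches [0.3, 0.3, 0.3]
```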

  • Research Article
  • Cited by 29
  • 10.1016/j.inffus.2024.102385
PODB: A learning-based polarimetric object detection benchmark for road scenes in adverse weather conditions
  • Mar 26, 2024
  • Information Fusion
  • Zhen Zhu + 3 more


  • Research Article
  • Cited by 3
  • 10.3390/electronics13245049
YOLOv8-STE: Enhancing Object Detection Performance Under Adverse Weather Conditions with Deep Learning
  • Dec 23, 2024
  • Electronics
  • Zhiyong Jing + 2 more

Object detection powered by deep learning is extensively utilized across diverse sectors, yielding substantial outcomes. However, adverse weather conditions such as rain, snow, and haze degrade image quality, making it extremely challenging for existing methods to detect objects in images captured in such environments. In response to this problem, our research puts forth a detection approach grounded in the YOLOv8 model, named YOLOv8-STE. Specifically, we introduce a new detection module, ST, on the basis of YOLOv8, which integrates global information step by step through window movement while capturing local details. This is particularly important in adverse weather conditions and effectively enhances detection accuracy. Additionally, an EMA mechanism is incorporated into the neck network, which reduces computational burden through streamlined operations and enriches the original features, making them more hierarchical and thus improving detection stability and generalization. Finally, soft-NMS replaces the traditional non-maximum suppression method. Experimental results indicate that the proposed YOLOv8-STE performs excellently under adverse weather conditions: compared to the baseline YOLOv8 model, it exhibits superior results on the RTTS dataset, providing a more efficient method for object detection in adverse weather.
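
Soft-NMS, which the paper substitutes for hard non-maximum suppression, decays the scores of overlapping boxes instead of discarding them. A minimal single-class numpy sketch using Gaussian decay (the decay variant is an assumption; the abstract does not specify it):

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS for a single class.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Boxes overlapping the current best detection have their scores
    decayed by exp(-iou^2 / sigma) rather than being removed outright.
    """
    boxes = boxes.astype(float).copy()
    scores = scores.astype(float).copy()
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep, idx = [], np.arange(len(scores))
    while idx.size:
        best = idx[np.argmax(scores[idx])]
        if scores[best] < score_thresh:
            break
        keep.append(best)
        idx = idx[idx != best]
        # IoU between the best box and every remaining box.
        x1 = np.maximum(boxes[best, 0], boxes[idx, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idx, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idx, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idx, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (areas[best] + areas[idx] - inter)
        scores[idx] *= np.exp(-(iou ** 2) / sigma)  # Gaussian decay
    return keep
```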

  • Research Article
  • 10.3390/s26010304
Robust Object Detection in Adverse Weather Conditions: ECL-YOLOv11 for Automotive Vision Systems
  • Jan 2, 2026
  • Sensors (Basel, Switzerland)
  • Zhaohui Liu + 3 more

The rapid development of intelligent transportation systems and autonomous driving technologies has made visual perception a key component in ensuring safety and improving efficiency in complex traffic environments. As a core task in visual perception, object detection directly affects the reliability of downstream modules such as path planning and decision control. However, adverse weather conditions (e.g., fog, rain, and snow) significantly degrade image quality—causing texture blurring, reduced contrast, and increased noise—which in turn weakens the robustness of traditional detection models and raises potential traffic safety risks. To address this challenge, this paper proposes an enhanced object detection framework, ECL-YOLOv11 (Edge-enhanced, Context-guided, and Lightweight YOLOv11), designed to improve detection accuracy and real-time performance under adverse weather conditions, thereby providing a reliable solution for in-vehicle perception systems. The ECL-YOLOv11 architecture integrates three key modules: (1) a Convolutional Edge-enhancement (CE) module that fuses edge features extracted by Sobel operators with convolutional features to explicitly retain boundary and contour information, thereby alleviating feature degradation and improving localization accuracy under low-visibility conditions; (2) a Context-guided Multi-scale Fusion Network (AENet) that enhances perception of small and distant objects through multi-scale feature integration and context modeling, improving semantic consistency and detection stability in complex scenes; and (3) a Lightweight Shared Convolutional Detection Head (LDHead) that adopts shared convolutions and GroupNorm normalization to optimize computational efficiency, reduce inference latency, and satisfy the real-time requirements of on-board systems. Experimental results show that ECL-YOLOv11 achieves mAP@50 and mAP@50–95 values of 62.7% and 40.5%, respectively, representing improvements of 1.3% and 0.8% over the baseline YOLOv11, while the Precision reaches 73.1%. The model achieves a balanced trade-off between accuracy and inference speed, operating at 237.8 FPS on standard hardware. Ablation studies confirm the independent effectiveness of each proposed module in feature enhancement, multi-scale fusion, and lightweight detection, while their integration further improves overall performance. Qualitative visualizations demonstrate that ECL-YOLOv11 maintains high-confidence detections across varying motion states and adverse weather conditions, avoiding category confusion and missed detections. These results indicate that the proposed framework provides a reliable and adaptable foundation for all-weather perception in autonomous driving systems, ensuring both operational safety and real-time responsiveness.
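
The CE module fuses Sobel edge responses with convolutional features. A minimal PyTorch sketch of that general idea follows; the module structure and the fusion by addition are assumptions, not the exact ECL-YOLOv11 design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgeEnhance(nn.Module):
    """Fuse fixed Sobel edge responses with learned conv features.

    A sketch of the 'convolutional edge-enhancement' idea; the exact
    fusion in ECL-YOLOv11 may differ.
    """
    def __init__(self, channels):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # One frozen depthwise Sobel filter pair per input channel.
        kernel = torch.stack([gx, gy]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", kernel)
        self.channels = channels
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.proj = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        edges = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        # Per-channel edge magnitude from the gx/gy responses.
        gx, gy = edges[:, 0::2], edges[:, 1::2]
        mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
        return self.conv(x) + self.proj(torch.cat([x, mag], dim=1))
```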

  • Research Article
  • Cited by 1
  • 10.5194/isprs-annals-x-1-w1-2023-657-2023
Deep Learning for Object Detection Using Radar Data
  • Dec 5, 2023
  • ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
  • A M Reda + 2 more

Abstract. Deep learning algorithms are becoming increasingly instrumental in autonomous driving by identifying and recognizing road entities to ensure secure navigation and decision-making. Autonomous car datasets play a vital role in developing and evaluating perception systems. Nevertheless, the majority of current datasets are acquired using Light Detection and Ranging (LiDAR) and camera sensors. Deep neural networks yield remarkable object recognition results when applied to camera and LiDAR data, but these sensors perform poorly under adverse weather conditions such as rain, fog, and snow due to their operating wavelengths. This paper evaluates the use of a RADAR dataset for detecting objects in adverse weather conditions, where LiDAR and cameras may fail to be effective. It presents two object detection experiments using the Faster-RCNN architecture with a ResNet-50 backbone and COCO evaluation metrics: Experiment 1 detects a single class, while Experiment 2 detects eight classes. The results show that, as expected, the average precision (AP) for detecting one class (47.2) is better than for detecting eight classes (27.4). Compared to results from the literature, which achieved an overall AP of 45.77, the Experiment 1 result is slightly better in accuracy, mainly due to hyper-parameter optimization. These object detection and recognition outcomes indicate the potential effectiveness of RADAR data in automotive applications, particularly in adverse weather conditions where vision and LiDAR may encounter limitations.
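
Both experiments use Faster-RCNN with a ResNet-50 backbone; a minimal torchvision sketch of configuring such a detector for a custom class count (the radar-image data pipeline is omitted; 8 classes plus background mirrors Experiment 2):

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pretrained Faster R-CNN with a ResNet-50 FPN backbone.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor head: 8 object classes + background.
num_classes = 9
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.train()
# Training expects a list of image tensors and per-image target dicts;
# the dummy data below only demonstrates the interface.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100., 120., 180., 200.]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)  # dict of classification/regression losses
print({k: float(v) for k, v in losses.items()})
```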

  • Conference Article
  • Cited by 6
  • 10.1109/iraset57153.2023.10152924
YOLO Algorithms Performance Comparison for Object Detection in Adverse Weather Conditions
  • May 18, 2023
  • Zineb Haimer + 3 more

This paper evaluates and compares the performance of several versions of the YOLO (You Only Look Once) object detection (OD) algorithm under adverse weather conditions, including rain, fog, snow, and sand, during both day and night. A dataset of images captured in these conditions was used to assess the OD performance of the last five YOLO versions on the cloud-based Google Colab platform. The results showed that YOLOv7 outperformed the other versions in both speed and accuracy: it was the fastest algorithm, completing OD in under 17.4 milliseconds, and it had the highest detection rate for the most classes in the dataset. These findings suggest that YOLOv7 is the best option for OD in adverse weather and challenging lighting conditions.

  • Research Article
  • Cited by 6
  • 10.3390/rs15163992
Adaptive Feature Attention Module for Robust Visual–LiDAR Fusion-Based Object Detection in Adverse Weather Conditions
  • Aug 11, 2023
  • Remote Sensing
  • Taek-Lim Kim + 2 more

Object detection is one of the vital components used for autonomous navigation in dynamic environments. Camera and lidar sensors have been widely used for efficient object detection by mobile robots. However, they suffer under adverse conditions in operating environments, such as sun, fog, snow, and extreme illumination changes from day to night. The sensor fusion of camera and lidar data helps to enhance the overall performance of an object detection network, but the diverse distribution of training data makes efficient learning of the network a challenging task. To address this challenge, we systematically study existing visual- and lidar-feature-based object detection methods and propose an adaptive feature attention module (AFAM) for robust multisensory data-fusion-based object detection in outdoor dynamic environments. Given the camera and lidar features extracted from the intermediate layers of EfficientNet backbones, the AFAM computes the uncertainty between the two modalities and adaptively refines visual and lidar features via attention along the channel and spatial axes. Integrated with EfficientDet, the AFAM performs adaptive recalibration and fusion of visual-lidar features by filtering noise and extracting discriminative features for an object detection network under specific environmental conditions. We evaluate the AFAM on a benchmark dataset exhibiting weather and light variations. The experimental results demonstrate that the AFAM significantly enhances the overall detection accuracy of an object detection network.
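
AFAM refines features via attention along the channel and spatial axes. A minimal PyTorch sketch of that general pattern (CBAM-style) follows; the real AFAM additionally weighs the two modalities by their uncertainty, which is omitted here.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Refine a feature map along channel then spatial axes.

    A generic sketch of the attention pattern AFAM builds on, not the
    paper's exact module.
    """
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-pooled maps.
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))
```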

  • Research Article
  • 10.63887/jtie.2025.1.5.14
Enhancing the Robustness of Environment Perception Algorithms for Autonomous Vehicles under Complex Weather Conditions
  • Oct 25, 2025
  • Journal of Technology Innovation and Engineering
  • Zhuo Geng + 1 more

Autonomous vehicles rely critically on environment perception algorithms to accurately interpret their surroundings and make reliable navigation decisions. In real-world deployments, perception systems face significant challenges in complex and adverse weather conditions such as heavy rain, snow, dense fog, and rapidly changing illumination. Under these conditions, sensor measurements degrade, object detection becomes unreliable, and scene understanding is hampered by noise, occlusion, and reduced visibility. Increasing the robustness of perception algorithms has therefore become a central research objective in autonomous driving, aiming to preserve operational safety and decision-making accuracy regardless of environmental variability. This paper analyzes the principal limitations of current perception systems in adverse weather, including sensor-specific weaknesses and algorithmic vulnerabilities. It examines recent advances in robustness enhancement strategies, focusing on deep multimodal sensor fusion, data augmentation techniques, domain adaptation, and weather-specific training paradigms. Drawing upon data from LiDAR, radar, and vision systems, as well as published experimental studies, it synthesizes a framework for integrating adaptive algorithms with sensor fusion architectures to withstand environmental perturbations. Finally, future research avenues are proposed to address gaps in cross-season perception consistency, autonomous sensor calibration, and integration of generative AI models to simulate adverse weather for training purposes. The findings contribute to the broader goal of designing perception systems capable of maintaining high reliability in unpredictable outdoor conditions, a critical milestone towards achieving full autonomy in transportation.

  • Research Article
  • 10.1080/23307706.2025.2499526
DRCFusion: dual-stream radar-camera fusion for object detection in adverse weather conditions
  • May 7, 2025
  • Journal of Control and Decision
  • Tianwen Pan + 2 more

Robust object detection is crucial for the safety of autonomous driving. Cameras and radar sensors work in synergy as they provide complementary sources of information. Existing methods mostly use independent dual-branch frameworks to generate Bird's Eye View (BEV) images from radar and camera data, and then perform adaptive modality fusion. However, in adverse weather conditions, merging the features of both sensors presents challenges because cameras are significantly affected by weather. How to dynamically adjust the weight of camera features and guide the generation of camera BEV images has a substantial impact on the final detection results. In this paper, we propose DRCFusion, a radar-camera object detector. Specifically, we design the Semantic Transfer module, which adaptively adjusts the weight of camera image features based on varying weather conditions. To enhance the guidance for the camera in generating BEV features, a Geometric Transfer module is developed, which implicitly corrects the feature distribution of the camera image in the BEV space, utilising radar spatial features as guidance. We evaluate the performance of this method in object detection on the Radiate dataset. Comparisons with baseline methods demonstrate the performance improvements brought by the proposed approach.
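
A minimal sketch of the adaptive modality-weighting idea: gate the camera BEV features by a learned per-sample reliability score before fusing with radar. This is a hypothetical illustration, not the paper's actual Semantic Transfer module.

```python
import torch
import torch.nn as nn

class SemanticGate(nn.Module):
    """Down-weight camera BEV features when they look unreliable.

    Hypothetical sketch of adaptive camera/radar weighting; DRCFusion's
    Semantic Transfer module may be structured differently.
    """
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, cam_bev, radar_bev):
        # Per-sample reliability score in [0, 1] scales camera features.
        w = self.gate(cam_bev).view(-1, 1, 1, 1)
        return self.fuse(torch.cat([w * cam_bev, radar_bev], dim=1))
```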

  • Research Article
  • Cited by 26
  • 10.3390/app14125277
IDP-YOLOV9: Improvement of Object Detection Model in Severe Weather Scenarios from Drone Perspective
  • Jun 18, 2024
  • Applied Sciences
  • Jun Li + 3 more

Despite their proficiency with typical environmental datasets, deep learning-based object detection algorithms struggle when faced with diverse adverse weather conditions. Moreover, existing methods often address single adverse weather scenarios, neglecting situations involving multiple concurrent adverse conditions. To tackle these challenges, we propose an enhanced approach to object detection in power construction sites under various adverse weather conditions, dubbed IDP-YOLOV9. This model leverages a parallel architecture comprising the Image Dehazing and Enhancement Processing (IDP) module and an improved YOLOV9 object detection module. Specifically, for images captured in adverse weather, our approach employs a parallel architecture that includes the Three-Weather Removal Algorithm (TRA) module and the Deep Learning-based Image Enhancement (DLIE) module, which, together, filter multiple weather factors to enhance image quality. Subsequently, we introduce an improved YOLOV9 detection network module that incorporates a three-layer routing attention mechanism for object detection. Experiments demonstrate that the IDP module significantly improves image quality by mitigating the impact of various adverse weather conditions. Compared to traditional single-processing models, our method improves recognition accuracy on complex weather datasets by 6.8% in terms of mean average precision (mAP50).

  • Research Article
  • Cited by 33
  • 10.1111/cgf.14692
TogetherNet: Bridging Image Restoration and Object Detection Together via Dynamic Enhancement Learning
  • Oct 1, 2022
  • Computer Graphics Forum
  • Yongzhen Wang + 6 more

Adverse weather conditions such as haze, rain, and snow often impair the quality of captured images, causing detection networks trained on normal images to generalize poorly in these scenarios. In this paper, we raise an intriguing question: can the combination of image restoration and object detection boost the performance of cutting-edge detectors in adverse weather conditions? To answer it, we propose an effective yet unified detection paradigm that bridges these two subtasks together via dynamic enhancement learning to discern objects in adverse weather conditions, called TogetherNet. Different from existing efforts that intuitively apply image dehazing/deraining as a pre-processing step, TogetherNet considers a multi-task joint learning problem. Following the joint learning scheme, clean features produced by the restoration network are shared to learn better object detection in the detection network, thus helping TogetherNet enhance the detection capacity in adverse weather conditions. Besides the joint learning architecture, we design a new Dynamic Transformer Feature Enhancement module to improve the feature extraction and representation capabilities of TogetherNet. Extensive experiments on both synthetic and real-world datasets demonstrate that TogetherNet outperforms state-of-the-art detection approaches by a large margin, both quantitatively and qualitatively. Source code is available at https://github.com/yz-wang/TogetherNet.
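
The joint learning scheme shares backbone features between restoration and detection and trains both with one objective. A generic sketch of such a multi-task step follows; the module decomposition, the L1 restoration loss, and the weight `lam` are assumptions, not TogetherNet's exact formulation.

```python
import torch.nn as nn

def joint_step(backbone, restore_head, detect_head, det_loss_fn,
               degraded, clean, targets, lam=0.1):
    """One training step of a restoration + detection joint objective.

    Generic multi-task sketch: `backbone`, the heads, and `lam` are
    illustrative stand-ins, not TogetherNet's actual modules.
    """
    feats = backbone(degraded)        # features shared by both tasks
    restored = restore_head(feats)    # image restoration branch
    preds = detect_head(feats)        # detection branch
    loss = (det_loss_fn(preds, targets)
            + lam * nn.functional.l1_loss(restored, clean))
    loss.backward()
    return loss
```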
