Speed bump detection on LiDAR point cloud for autonomous vehicles
Abstract: Speed bump detection is paramount for ensuring the safe and comfortable operation of autonomous vehicles while complying with traffic regulations. Detecting speed bumps well in advance enables timely brake application, ensuring a smooth travel experience for passengers. Autonomous vehicles rely on a range of sensors for perception, including cameras, radar, stereo vision, and light detection and ranging (LiDAR). LiDAR, in particular, stands out for its ability to generate dense point clouds that accurately capture the geometry and depth of surrounding objects, providing unparalleled detail for robust perception systems. This paper introduces a novel technique for speed bump detection leveraging LiDAR data. The method capitalizes on the variance in Z-values between road surfaces and speed bumps, offering promising insights for enhancing road safety and passenger comfort. The proposed method underwent rigorous testing on a dataset collected within the IIT Hyderabad campus and demonstrated effective speed bump detection: speed bumps could be reliably detected up to a distance of 15 meters at a rate of approximately 18 frames per second. Moreover, the method's potential for integration into autonomous vehicles promises to contribute significantly to a seamless and safe journey for passengers. The successful implementation of this technique underscores its potential to enhance autonomous driving systems, providing vehicles with advanced perception capabilities to navigate complex road environments with heightened safety and comfort. Further research and development in this area hold promise for continued advancements in autonomous vehicle technology, paving the way for safer and more efficient transportation.
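To make the Z-variance idea concrete, below is a minimal sketch (not the authors' implementation) that grids ground points into cells and flags cells whose height variance exceeds that of flat road; the cell size, variance threshold, and minimum point count are illustrative assumptions.

```python
# Hedged sketch of Z-variance-based bump detection; all parameters
# (cell size, threshold, minimum points per cell) are illustrative,
# not values from the paper.
import numpy as np

def speed_bump_candidates(points, cell=0.5, z_var_thresh=4e-4, min_pts=5):
    """Return grid cells whose Z variance exceeds a flat-road level."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    order = np.lexsort((ij[:, 1], ij[:, 0]))      # group points by cell
    ij, z = ij[order], points[order, 2]
    change = np.any(np.diff(ij, axis=0) != 0, axis=1)
    starts = np.concatenate(([0], np.flatnonzero(change) + 1))
    ends = np.concatenate((starts[1:], [len(z)]))
    return [tuple(map(int, ij[s])) for s, e in zip(starts, ends)
            if e - s >= min_pts and z[s:e].var() > z_var_thresh]

# Toy road: flat surface with a ~10 cm sinusoidal bump at x in (4, 5).
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0, 0.0], [10.0, 4.0, 0.01], size=(5000, 3))
m = (pts[:, 0] > 4) & (pts[:, 0] < 5)
pts[m, 2] += 0.1 * np.sin(np.pi * (pts[m, 0] - 4))
print(speed_bump_candidates(pts))                 # cells along the bump
```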
- Research Article
2
- 10.3233/jifs-219256
- Mar 31, 2022
- Journal of Intelligent & Fuzzy Systems
A current challenge for autonomous vehicles is the detection of irregularities on road surfaces in order to prevent accidents; in particular, speed bump detection is an important task for safe and comfortable autonomous navigation. Some techniques have achieved acceptable speed bump detection under optimal road surface conditions, especially when bumps are well marked. However, in developing countries it is very common to find unmarked speed bumps, where existing techniques fail. This paper proposes a methodology to detect both marked and unmarked speed bumps. For clearly painted speed bumps, the local binary patterns technique is applied to extract features from an image dataset. For unmarked speed bump detection, stereo vision is used: point clouds obtained from the 3D reconstruction are converted to triangular meshes by applying Delaunay triangulation, and the most relevant features describing speed bump elevations on the surface meshes are selected and extracted. The results make an important contribution and improve on some existing techniques, since the reconstruction of three-dimensional meshes provides relevant information for detecting speed bumps by their elevation above the surface even when they are not marked.
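As a rough illustration of the unmarked-bump branch, the sketch below triangulates reconstructed points with scipy's Delaunay (standing in for the paper's pipeline) and flags triangles elevated above the median road level; the threshold and toy data are assumptions.

```python
# Hedged sketch: Delaunay triangulation over the ground plane, then a
# simple per-triangle elevation feature. Not the authors' feature set.
import numpy as np
from scipy.spatial import Delaunay

def elevated_triangles(points, rise_thresh=0.05):
    """Return triangles whose mean Z rises above the scene median Z."""
    tri = Delaunay(points[:, :2])                 # 2D triangulation
    mean_z = points[tri.simplices, 2].mean(axis=1)
    return tri.simplices[mean_z - np.median(points[:, 2]) > rise_thresh]

pts = np.random.default_rng(1).uniform(0, 1, size=(200, 3)) * [5, 5, 0.02]
m = (pts[:, 0] > 2) & (pts[:, 0] < 3)
pts[m, 2] += 0.08                                 # raised strip as a "bump"
print(len(elevated_triangles(pts)))               # triangles on the bump
```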
- Conference Article
- 10.4271/2020-01-0703
- Apr 14, 2020
<div class="section abstract"><div class="htmlview paragraph">A Light Detection And Ranging (LiDAR) is now becoming an essential sensor for an autonomous vehicle. The LiDAR provides the surrounding environment information of the vehicle in the form of a point cloud. A decision-making system of the autonomous car is able to determine a safe and comfort maneuver by utilizing the detected LiDAR point cloud. The LiDAR points on the cloud are classified as dynamic or static class depending on the movement of the object being detected. If the movement class (dynamic or static) of detected points can be provided by LiDAR, the decision-making system is able to plan the appropriate motion of the autonomous vehicle according to the movement of the object. This paper proposes a real-time process to segment the motion states of LiDAR points. The basic principle of the classification algorithm is to classify the point-wise movement of a target point cloud through the other point clouds and sensor poses. First, a fixed size buffer store the LiDAR point clouds and sensor poses for a constant time window. Second, motion beliefs of the target point cloud against other point clouds and sensor poses in the buffer are estimated, respectively. Each motion belief of the points in the target point cloud is represented by a series of masses of dynamic, static, and unknown based on the evidence theory. Finally, the series of motion belief masses of the target point cloud for the other point clouds and poses are integrated through the Dempster-Shafer combination. The integrated mass value is used to classify the point-wise motion of the target point cloud into the state of dynamic, static, and unknown. The proposed algorithm was quantitatively evaluated through the simulation of LiDAR sensors and surrounding environment. Then, the algorithm was qualitatively validated through the experiments using an autonomous car equipped with LiDAR. The autonomous vehicle was able to perform the 3D point cloud mapping and map-matching localization.</div></div>
- Research Article
6
- 10.3390/info13010018
- Jan 4, 2022
- Information
With the advancement of artificial intelligence, deep learning technology is applied in many fields, and the autonomous car system is one of its most important application areas. LiDAR (Light Detection and Ranging) is one of the most critical components of self-driving cars: it can quickly scan the environment to obtain a large amount of high-precision three-dimensional depth information, which self-driving cars use to reconstruct the three-dimensional environment. The autonomous car system can identify various situations in the vicinity through the information provided by LiDAR and choose a safer route. This paper presents a decoder for data packets from the Velodyne HDL-64 LiDAR. The proposed decoder converts the information in the original data packet into X, Y, and Z point cloud data so that the autonomous vehicle can use the decoded information to reconstruct the three-dimensional environment and perform object detection and classification. To evaluate the performance of the proposed LiDAR decoder, standard original packets taken from the Map GMU (George Mason University) data are used for experimental comparison. The average decoding time for one frame is 7.678 milliseconds. Compared with other methods, the proposed LiDAR decoder achieves higher decoding speed and efficiency.
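The geometric core of any such decoder is converting each return's range, azimuth, and fixed laser elevation into Cartesian coordinates. The sketch below shows this step only, following the common Velodyne axis convention; packet parsing and per-laser calibration are omitted and the sample values are invented.

```python
# Hedged sketch of range/angle-to-XYZ conversion (Velodyne-style axes:
# x = r*cos(el)*sin(az), y = r*cos(el)*cos(az), z = r*sin(el)).
import numpy as np

def returns_to_xyz(r_m, azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    xy = r_m * np.cos(el)                   # horizontal range component
    return np.stack([xy * np.sin(az), xy * np.cos(az), r_m * np.sin(el)],
                    axis=-1)

print(returns_to_xyz(np.array([10.0]), np.array([45.0]), np.array([-8.0])))
```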
- Research Article
52
- 10.1007/s00138-017-0845-3
- May 29, 2017
- Machine Vision and Applications
3D urban maps with semantic labels and metric information are not only essential for next-generation robots such as autonomous vehicles and city drones, but also help to visualize and augment the local environment in mobile user applications. The machine vision challenge is to generate accurate urban maps from existing data with minimal manual annotation. In this work, we propose a novel methodology that takes GPS-registered LiDAR (Light Detection And Ranging) point clouds and street view images as inputs and creates semantic labels for the 3D point clouds using a hybrid of rule-based parsing and learning-based labelling that combines point cloud and photometric features. The rule-based parsing boosts segmentation of simple and large structures such as street surfaces and building facades, which span almost 75% of the point cloud data. For more complex structures, such as cars, trees, and pedestrians, we adopt boosted decision trees that exploit both structural (LiDAR) and photometric (street view) features. We provide qualitative examples of our methodology in 3D visualization, where we construct parametric graphical models from labelled data, and in 2D image segmentation, where 3D labels are back-projected to the street view images. In quantitative evaluation we report classification accuracy and computing times, and compare results to competing methods on three popular databases: NAVTEQ True, Paris-Rue-Madame, and TLS (terrestrial laser scanned) Velodyne.
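A minimal stand-in for the learning-based stage is sketched below: boosted decision trees over concatenated structural and photometric features, using sklearn's GradientBoostingClassifier in place of the authors' boosted trees; the feature names and data are synthetic placeholders.

```python
# Toy boosted-trees classifier over fused LiDAR + street-view features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
X_lidar = rng.normal(size=(300, 4))      # e.g. height, planarity, normals
X_photo = rng.normal(size=(300, 3))      # e.g. mean RGB around each point
X = np.hstack([X_lidar, X_photo])
y = rng.integers(0, 3, size=300)         # car / tree / pedestrian (toy)

clf = GradientBoostingClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:5]))                # per-point semantic labels
```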
- Research Article
16
- 10.3390/app11073018
- Mar 28, 2021
- Applied Sciences
A worldwide increase in the number of vehicles on the road has led to an increase in the frequency of serious traffic accidents, causing loss of life and property. Autonomous vehicles could be part of the solution, but their safe operation is dependent on the onboard LiDAR (light detection and ranging) systems used for the detection of the environment outside the vehicle. Unfortunately, problems with the application of LiDAR in autonomous vehicles remain, for example, the weakening of the echo detection capability in adverse weather conditions. The signal is also affected, even drowned out, by sensory noise outside the vehicles, and the problem can become so severe that the autonomous vehicle cannot move. Clearly, the accuracy of the stereo images sensed by the LiDAR must be improved. In this study, we developed a method to improve the acquisition of LiDAR data in adverse weather by using a combination of a Kalman filter and nearby point cloud denoising. The overall LiDAR framework was tested in experiments in a space 2 m in length and width and 0.6 m high. Normal weather and three kinds of adverse weather conditions (rain, thick smoke, and rain and thick smoke) were simulated. The results show that this system can be used to recover normal weather data from data measured by LiDAR even in adverse weather conditions. The results showed an effective improvement of 10% to 30% in the LiDAR stereo images. This method can be developed and widely applied in the future.
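The Kalman-filter part of such a pipeline can be illustrated with a scalar filter smoothing one noisy range stream; the process and measurement noise values are guesses, and this is a generic sketch rather than the study's system.

```python
# Scalar Kalman filter on a noisy range stream (constant-value model).
import numpy as np

def kalman_smooth(ranges, q=1e-4, r=1e-2):
    x, p = ranges[0], 1.0                 # state estimate and variance
    out = [x]
    for z in ranges[1:]:
        p += q                            # predict: variance grows
        k = p / (p + r)                   # Kalman gain
        x += k * (z - x)                  # update with measurement z
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

noisy = 2.0 + 0.1 * np.random.default_rng(3).normal(size=50)
print(kalman_smooth(noisy)[-3:])          # settles near the true 2.0 m
```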
- Conference Article
1
- 10.4271/2023-01-0740
- Apr 11, 2023
<div class="section abstract"><div class="htmlview paragraph">Image segmentation has historically been a technique for analyzing terrain for military autonomous vehicles. One of the weaknesses of image segmentation from camera data is that it lacks depth information, and it can be affected by environment lighting. Light detection and ranging (LiDAR) is an emerging technology in image segmentation that is able to estimate distances to the objects it detects. One advantage of LiDAR is the ability to gather accurate distances regardless of day, night, shadows, or glare. This study examines LiDAR and camera image segmentation fusion to improve an advanced driver-assistance systems (ADAS) algorithm for off-road autonomous military vehicles. The volume of points generated by LiDAR provides the vehicle with distance and spatial data surrounding the vehicle. Processing these point clouds with semantic segmentation is a computationally intensive process requiring fusion of camera and LiDAR data so that the neural network can process depth and image data simultaneously. We create fused depth images by using a projection method from the LiDAR onto the images to create depth images (RGB-Depth). A neural network is trained to segment the fused data from RELLIS-3D, which is a multi-modal data set for off road robotics. This data set contains both LiDAR point clouds and corresponding RGB images for training the neural network. The labels from the data set are grouped as objects, traversable terrain, non-traversable terrain, and sky to balance underrepresented classes. Results on a modified version of DeepLabv3+ with a ResNet-18 backbone achieves an overall accuracy of 93.989 percent.</div></div>
- Research Article
55
- 10.1109/access.2017.2699686
- Jan 1, 2017
- IEEE Access
Light detection and ranging (LiDAR) has become part and parcel of ongoing research in autonomous vehicles. LiDAR efficiently captures data during day and night alike; yet data accuracy is affected in adverse weather conditions. LiDAR data fusion with sensors such as color cameras, hyperspectral cameras, and RADAR proves to be a viable solution to improve data quality and add spectral information. For this purpose, the LiDAR 3D point cloud containing intensity data is transformed into 2D intensity images. LiDAR produces a large point cloud, but when generating images for a limited field of view, data sparsity results in poor-quality images. Moreover, the 3D-to-2D transformation also involves data reduction, which further deteriorates image quality. This paper focuses on generating intensity images from LiDAR data using interpolation techniques, including bi-linear, natural neighbor, bi-cubic, kriging, inverse distance weighted, and nearest neighbor interpolation. The main focus is to test the suitability of these interpolation methods for 2D image generation and to analyze the quality of the generated images. Image similarity metrics, such as root mean square error, normalized least square error, peak signal-to-noise ratio, correlation, difference entropy, mutual information, and structural similarity index measurement, are utilized for camera and LiDAR image matching, and their ability to compare images from heterogeneous sensors is also analyzed. The generated images can further be used for data fusion. Images generated from LiDAR points also have an associated distance matrix, which can be used to find the distance of any given pixel in the image. In addition, the accuracy of the interpolated distance data is evaluated by comparing it with the original distance values of traffic cones placed in front of the vehicle. Results show that inverse distance weighted interpolation outperforms the other selected methods in 2D image quality, while images from nearest neighbor interpolation appear subjectively brighter.
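As a reference for the best-performing method, a generic inverse distance weighted (IDW) interpolation is sketched below; the power parameter is a common default, not necessarily the paper's setting.

```python
# Generic IDW: each query takes a distance-weighted mean of known values.
import numpy as np

def idw(known_xy, known_vals, query_xy, power=2.0, eps=1e-12):
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # nearer samples weigh more
    return (w * known_vals).sum(axis=1) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([10.0, 20.0, 30.0])
print(idw(pts, vals, np.array([[0.2, 0.2]])))   # pulled toward 10.0
```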
- Conference Article
3
- 10.1117/12.2539863
- Aug 14, 2019
Autonomous vehicles are required to perceive the environment to make correct driving decisions. The sensors most commonly used by autonomous vehicles are the camera and Light Detection and Ranging (LiDAR). In this work, we integrate LiDAR data with the image captured by the camera to assign color information to the point cloud, yielding a 3D model, and to assign depth information to the image pixels, yielding a depth map. The LiDAR data is sparse, and the resolution of the image is much greater than that of the LiDAR data. To match the resolutions of the LiDAR and image data, we previously utilized Gaussian Process Regression (GPR) to interpolate the depth map, but it was unable to completely fill the empty locations in the depth map. In this paper, we propose a method to interpolate the 2D depth map so as to completely fill those empty locations. In this study, we use a Velodyne VLP-16 LiDAR and a monocular camera. Our method is based on a covariance matrix, in which the depth value assigned to an empty location in the depth map is decided according to the value of the covariance function in the covariance matrix. Our method surpasses GPR in run time and interpolation quality, showing that our approach is fast enough for real-time use in autonomous vehicles.
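One plausible reading of the covariance-based fill is sketched below: an empty depth pixel takes a weighted average of nearby valid pixels, with weights from a squared-exponential covariance function. The search radius and length scale are assumptions, not the paper's values.

```python
# Hedged sketch of covariance-weighted depth completion.
import numpy as np

def fill_depth(depth, radius=3, length_scale=2.0):
    filled = depth.copy()
    vy, vx = np.nonzero(depth > 0)        # valid (LiDAR-hit) pixels
    valid, vals = np.stack([vy, vx], axis=1), depth[vy, vx]
    for y, x in zip(*np.nonzero(depth == 0)):
        d2 = ((valid - [y, x]) ** 2).sum(axis=1)
        near = d2 <= radius ** 2
        if near.any():
            w = np.exp(-d2[near] / (2 * length_scale ** 2))
            filled[y, x] = (w * vals[near]).sum() / w.sum()
    return filled

d = np.zeros((5, 5))
d[0, 0], d[4, 4], d[2, 3] = 4.0, 6.0, 5.0
print(round(fill_depth(d)[2, 2], 2))      # interpolated from neighbors
```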
- Research Article
40
- 10.3390/geosciences9070323
- Jul 23, 2019
- Geosciences
Digital elevation models (DEM) have been frequently used for the reduction and management of flood risk. Various classification methods have been developed to extract DEMs from point clouds, but their accuracy and computational efficiency need to be improved. The objectives of this study were as follows: (1) to determine the suitability of a new method to produce DEMs from unmanned aerial vehicle (UAV) and light detection and ranging (LiDAR) data, using raw point cloud classification and ground point filtering based on deep learning with neural networks (NN); (2) to test the convenience of rebalancing datasets for point cloud classification; (3) to evaluate the effect of land cover class on algorithm performance and elevation accuracy; and (4) to assess the usability of the LiDAR and UAV structure from motion (SfM) DEMs in flood risk mapping. In this paper, a new method of raw point cloud classification and ground point filtering based on deep learning using NN is proposed and tested on LiDAR and UAV data. The NN was trained on approximately 6 million points from which local and global geometric features and intensity data were extracted. Pixel-by-pixel accuracy assessment and visual inspection confirmed that filtering point clouds based on deep learning using NN is an appropriate technique for ground classification and DEM production: for the test and validation areas, both ground and non-ground classes achieved high recall (>0.70) and high precision (>0.85), showing that the two classes were well handled by the model. The method used for balancing the original dataset did not have a significant influence on algorithm accuracy, and it is suggested not to use rebalancing unless the distributions of the generated and real datasets remain the same. Furthermore, comparisons between true data and the LiDAR and UAV structure from motion (UAV SfM) point clouds were analyzed, as well as the derived DEMs. The root mean square error (RMSE) and mean average error (MAE) of the DEM were 0.25 m and 0.05 m, respectively, for LiDAR data, and 0.59 m and -0.28 m, respectively, for UAV data. For all land cover classes, the UAV DEM overestimated the elevation, whereas the LiDAR DEM underestimated it. Accuracy did not differ significantly across vegetation classes for the LiDAR DEM, while for the UAV DEM, the RMSE increased with the height of the vegetation class. The comparison of inundation areas derived from true, LiDAR, and UAV data for different water levels showed that in all cases the largest differences were obtained for the lowest water level tested, while the best performance was obtained for very high water levels. Overall, the approach presented in this work produced DEMs from LiDAR and UAV data with the accuracy required for flood mapping according to European Flood Directive standards. Although LiDAR is the recommended technology for point cloud acquisition, UAV SfM is a suitable alternative in hilly areas.
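A toy stand-in for the NN-based ground filtering is sketched below, using sklearn's MLPClassifier on synthetic per-point features; the feature set and labeling rule are placeholders, not the study's data.

```python
# Toy MLP for ground / non-ground point classification.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 6))           # e.g. slope, roughness, intensity
y = (X[:, 0] + 0.5 * X[:, 1] < 0).astype(int)   # synthetic "ground" rule

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=800).fit(X, y)
print(clf.score(X, y))                   # training accuracy on toy data
```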
- Research Article
- 10.1007/s41651-025-00233-4
- Jul 22, 2025
- Journal of Geovisualization and Spatial Analysis
Accurate detection of speed bumps is essential to ensure comfort and safe navigation in autonomous vehicles. Most existing speed bump detection methods depend on image-based techniques. However, identifying speed bumps in images can be challenging, especially when visibility is reduced due to poor lighting or weather conditions, or when the speed bumps are not clearly marked. To address these limitations, this paper proposes a method for detecting speed bumps using point cloud data acquired by mobile laser scanning (MLS). Point clouds allow speed bumps to be detected solely from their geometry, regardless of visual markings. The proposed method aims to accurately detect speed bumps while overcoming the challenges posed by noisy and irregular real-world data. A set of geometric features is created to describe the speed bumps and the surrounding flat road surface. This feature set is used as input to machine learning classifiers, which are fine-tuned and combined into an ensemble model. Elevation differences between potential speed bumps and the surrounding road surface are analysed during post-processing to discard false positives. Road segments across Trondheim, Norway, containing 89 speed bumps, are used to evaluate the proposed method. Experimental results show that the proposed method achieves a recall of 92.2% and a precision of 89.3%, demonstrating its ability to correctly identify speed bumps in challenging real-world conditions.
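The classifier-ensemble stage might look like the sketch below, which combines two sklearn models with soft voting over synthetic geometric features; the specific classifiers, features, and data are assumptions rather than the paper's configuration.

```python
# Toy soft-voting ensemble for bump vs. flat-road point classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 5))            # e.g. curvature, verticality, ...
y = rng.integers(0, 2, size=400)         # bump / flat labels (toy)

ens = VotingClassifier([("rf", RandomForestClassifier(n_estimators=50)),
                        ("lr", LogisticRegression(max_iter=500))],
                       voting="soft").fit(X, y)
print(ens.predict_proba(X[:3]))          # per-point class probabilities
```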
- Research Article
2
- 10.6574/jprs.2014.19(1).4
- Nov 1, 2014
LiDAR (Light Detection and Ranging) point clouds are measurements of irregularly distributed points on scanned object surfaces, acquired with airborne or terrestrial LiDAR systems. Feature extraction is the key to transforming LiDAR data into spatial information, and surface features are dominant in most LiDAR data corresponding to scanned object surfaces. This paper proposes a general method to segment co-surface points. An incremental segmentation strategy is developed for the implementation, which comprises several algorithms and employs various criteria to gradually segment LiDAR point clouds at several levels. There are four operation steps. First, the proximity of point clouds is established as spatial indices defined in an octree-structured voxel space. Second, a connected-component labeling algorithm for voxels is applied to segment neighboring points. Third, coplanar points are segmented with an octree-based split-and-merge algorithm as plane features. Finally, combining neighboring plane features forms surface features. Through these steps, LiDAR point clouds are progressively segmented into organized points, neighboring point groups, coplanar point groups, and co-surface point groups. The proposed method enables incremental retrieval and analysis of large LiDAR datasets. Experimental results demonstrate the effectiveness of the segmentation algorithm in handling both airborne and terrestrial LiDAR data. Both the end results and the intermediate results of the segmentation may be useful for object modeling for different purposes using LiDAR data.
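The first two steps (voxel indexing and connected-component labeling) can be illustrated as below, with scipy.ndimage.label standing in for the paper's own voxel labeling algorithm; the voxel size and toy blocks are assumptions.

```python
# Hedged sketch: voxelize a cloud, then label 26-connected components.
import numpy as np
from scipy import ndimage

def voxel_components(points, voxel=0.2):
    idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True                     # occupied voxels
    labels, n = ndimage.label(grid, structure=np.ones((3, 3, 3)))
    return labels[tuple(idx.T)], n                # per-point component id

a = np.mgrid[0:5, 0:5, 0:2].reshape(3, -1).T * 0.1   # dense block
b = a + [3.0, 0.0, 0.0]                               # offset copy
lab, n = voxel_components(np.vstack([a, b]))
print(n)                                   # two separated blocks -> 2
```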
- Research Article
8
- 10.3390/rs15184529
- Sep 14, 2023
- Remote Sensing
Light detection and ranging (LiDAR) is a widely used technology for the acquisition of three-dimensional (3D) information about a wide variety of physical objects and environments. However, before conducting a campaign, a test is typically performed to assess the potential of the chosen algorithm for information retrieval; this need not be a real campaign but can be a simulation, saving time and costs. Here, a multi-platform LiDAR simulation model considering the location, direction, and wavelength of each emitted laser pulse was developed based on the large-scale remote sensing data and image simulation framework (LESS) model, a 3D radiative transfer model for simulating passive optical remote sensing signals using a ray tracing algorithm. The LESS LiDAR simulator takes footprint size, returned energy, multiple scattering, and multispectral LiDAR into account. Waveform and point similarity were assessed against the LiDAR module of the discrete anisotropic radiative transfer (DART) model. Abstract and realistic scenes were designed to assess the simulated LiDAR waveforms and point clouds. A waveform comparison in the abstract scene with the DART LiDAR module showed a relative error lower than 1%. In the realistic scene, airborne and terrestrial laser scanning were simulated by the LESS and DART LiDAR modules; their coefficients of determination ranged from 0.9108 to 0.9984, with a mean of 0.9698. The number of discrete returns fitted well, with a coefficient of determination of 0.9986. A terrestrial point cloud comparison in the realistic scene showed that the coefficient of determination between the two sets of data could reach 0.9849. The performance of the LESS LiDAR simulator was also compared with the DART LiDAR module and HELIOS++. The results showed that the LESS LiDAR simulator is over three times faster than the DART LiDAR module and HELIOS++ when simulating terrestrial point clouds in a realistic scene. The proposed LiDAR simulator offers two modes for simulating point clouds: single-ray and multi-ray. The findings demonstrate that a single-ray simulation approach can reduce the simulation time by over 28 times without substantially affecting the overall point number or ground points, compared with employing multiple rays. This new LESS model integrating a LiDAR simulator has great potential for simultaneously simulating LiDAR data and optical images based on the same 3D scene and parameters. As a proof of concept, normalized difference vegetation index (NDVI) results from multispectral images and vertical profiles from multispectral LiDAR waveforms were simulated and analyzed. The results showed that the proposed LESS LiDAR simulator fulfills its design goals.
- Conference Article
1
- 10.1115/imece2021-73770
- Nov 1, 2021
Given the significant technological advances over the past few years, autonomous vehicles are gradually entering the industrialization stage. Light detection and ranging (LiDAR) sensors are seeing increased use in autonomous vehicles. However, the final implementation of the technology remains undetermined, because major automotive manufacturers have only just started selecting providers for data-collection units that can be introduced in commercial vehicles. Autonomous driving tests have, up to now, been conducted mostly in sunny environments, such as California or Texas. Under fog, rain, and snow, however, detection quality becomes severely degraded, particularly in range, and all the more so when conditions are extreme. In this work, the performance of LiDAR sensors under adverse weather conditions and the effects of LiDAR channels on object detection were investigated. Results showed that fog severely affected LiDAR performance, rain had a slight effect, and snow did not affect LiDAR performance. Results also showed that both dense fog and heavy rain affected object detection and the operating range of the LiDAR sensors.
- Research Article
- 10.1520/jte20240363
- Apr 9, 2025
- Journal of Testing and Evaluation
With the advent of technologies to support autonomous vehicles (AVs), there is a proliferation of different AV technologies from a variety of companies and organizations. With this increase in options comes the need to evaluate the operation of AV technologies to ensure safety and accuracy. Of particular note is the physical evaluation of the perception systems of AVs; however, there is a lack of standard methods for such evaluation. A set of test artifacts can be used to compare the performance of perception systems, but the artifacts must be usable with different types of perception sensors. This article presents the development of an artifact that has both undetectable and detectable edge cases for light detection and ranging (LiDAR) and radar sensors. Specifically, different physical properties were investigated to design the proposed artifact with the desired capability of achieving detectable and undetectable edge cases under different conditions. Through rigorous testing, a final design for the test artifact was completed in which the detectable component reflects at least 7.47 times more radio wave energy and yields at least 1.92 times as many LiDAR points as the undetectable component. The test artifact was further tested in outdoor conditions and in misaligned positions to demonstrate its versatility and potential weaknesses, respectively. The test artifact demonstrated in this research can therefore be used to compare the performance of different LiDAR and radar models within AV perception systems.
- Research Article
59
- 10.3390/s20041102
- Feb 18, 2020
- Sensors (Basel, Switzerland)
Crop 3D modeling allows site-specific management at different crop stages. In recent years, light detection and ranging (LiDAR) sensors have been widely used to gather information about plant architecture and extract biophysical parameters for decision-making programs. This study reconstructed vineyard crops using LiDAR technology, and its accuracy and performance were assessed for vineyard crop characterization using distance measurements, aiming to obtain a 3D reconstruction. The LiDAR system consisted of a 2D time-of-flight sensor, a gimbal connecting the device to the structure, and an RTK-GPS to record the position of the sensor data. The LiDAR sensor was installed facing downwards on board a mobile electric platform equipped with an RTK-GNSS receiver, scanning in planes perpendicular to the travel direction. Measurements of the distance between the LiDAR and the vineyards had high spatial resolution, providing high-density 3D point clouds. The resulting point cloud contained all the points where the laser beam impacted, and fusing the LiDAR impacts with their associated RTK-GPS positions allowed the creation of the 3D structure. Although the point clouds were filtered to discard points outside the study area, branch volume cannot be calculated directly, since the branches form a 3D solid cluster that encloses a volume. To obtain the 3D object surface, and therefore to be able to calculate the volume enclosed by it, a suitable alpha shape was generated as an outline enveloping the outer points of the point cloud. The 3D scenes were captured during the winter season, when only defoliated branches were present. The models were used to extract information on height and branch volume; such models might be used for automatic pruning or for relating these parameters to the future yield at each location. The 3D map was correlated with ground truth, which was determined manually by weighing the pruning remains. The number of LiDAR scans influenced the relationship with the actual biomass measurements and had a significant effect on the treatments. A positive linear fit was obtained when comparing actual dry biomass with LiDAR-derived volume, while the influence of individual treatments was of low significance. The results showed strong correlations with actual values of biomass and volume, with R2 = 0.75, and when comparing LiDAR scans with weight, the R2 rose to 0.85. These values show that the LiDAR technique is also valid for branch reconstruction, with great advantages over other types of non-contact ranging sensors regarding sampling resolution and sampling rates. Even narrow branches were properly detected, which demonstrates the accuracy of the system in difficult scenarios such as defoliated crops.
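The enclosed-volume step can be approximated as below; since the paper uses an alpha shape, this sketch substitutes scipy's convex hull, which gives an upper bound on the alpha-shape volume for a toy branch-like cloud.

```python
# Convex hull volume as a hedged stand-in for the alpha-shape volume.
import numpy as np
from scipy.spatial import ConvexHull

branch = np.random.default_rng(6).normal(size=(500, 3)) * [0.05, 0.05, 0.5]
print(ConvexHull(branch).volume)          # enclosed volume (toy units)
```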