Light Detection And Ranging Research Articles

Overview
5,894 articles published in the last 50 years

Related Topics

  • Light Detection And Ranging Data
  • Light Detection And Ranging Intensity
  • Airborne Laser Scanning
  • Lidar Data
  • Airborne LiDAR

Articles published on Light Detection And Ranging

5,783 search results, sorted by recency

LiDAR-derived canopy structure explains 137Cs concentrations in throughfall in Fukushima plantation forest.

  • Journal: Environmental Pollution (Barking, Essex : 1987)
  • Published: Jun 1, 2025
  • Authors: Yupan Zhang + 4

Precise and Fast LiDAR via Electrical Asynchronous Sampling

As a laser-based ranging method for precise environmental 3D sensing, light detection and ranging (LiDAR) has numerous applications in science and industry. However, conventional LiDAR systems face challenges in simultaneously achieving high ranging precision and fast measurement rates, which limits their applicability in demanding fields such as aerospace and smart healthcare. A simple and powerful time-of-flight (TOF) measurement method based on a single femtosecond laser is proposed, constituting the first demonstration of electrical asynchronous sampling (EAS) for ranging. It exploits the advantages of optical-frequency-comb ranging and overcomes the sampling aliasing and low data utilization inherent in traditional approaches. This enables a significant improvement in LiDAR performance, achieving micrometer-level precision and megahertz-regime update rates over meter-scale distances on non-cooperative targets. Specifically, a 38.8-µm Allan deviation is achieved at a 1-MHz update rate, and an 8.06-µm Allan deviation after 2-ms time averaging, based on a 56.091-MHz femtosecond fiber laser. This enhancement enables advanced measurement applications, including metrology monitoring of high-speed objects, precise 3D scanning imaging at 1 megapixel/s, and the first contactless vital-sign detection using TOF LiDAR. This LiDAR unlocks new possibilities for precise and fast real-time measurements in diverse fields.
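
For readers unfamiliar with the precision figure quoted above, the Allan deviation of a distance time series can be estimated as sketched below. This is a generic, non-overlapping estimator run on synthetic numbers, not the authors' processing chain.

```python
import numpy as np

def allan_deviation(samples, rate_hz, tau_s):
    """Non-overlapping Allan deviation of a distance series sampled at rate_hz,
    evaluated for averaging time tau_s."""
    m = int(round(tau_s * rate_hz))              # samples per averaging bin
    n_bins = len(samples) // m
    if n_bins < 2:
        raise ValueError("need at least two averaging bins")
    bins = samples[:n_bins * m].reshape(n_bins, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(bins) ** 2))

# Synthetic example: 1-MHz update rate, 2-ms averaging (bins of 2000 samples).
rng = np.random.default_rng(0)
d = 1.0 + 40e-6 * rng.standard_normal(2_000_000)     # metres, simulated noise
print(allan_deviation(d, rate_hz=1e6, tau_s=2e-3))
```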

  • Journal: Laser & Photonics Reviews
  • Published: May 30, 2025
  • Authors: Lizong Dong + 6

Quaternary Geology of the Indiana Portion of the Southern Half of the Kankakee 30- x 60-minute Quadrangle

The map of the Quaternary Geology of the Indiana Portion of the Southern Half of the Kankakee 30- x 60-minute Quadrangle displays unconsolidated Pleistocene glacial sediments associated with the Lake Michigan Lobe and Huron-Erie Lobe of the Laurentide Ice Sheet and post-glacial sediments deposited by eolian, fluvial, and lacustrine processes in northwestern Indiana. Glacial and proglacial deposits include diamicton, glaciofluvial, and glaciolacustrine sediments deposited during the Wisconsin Episode glaciation. Non-glacial deposits include eolian, alluvial, paludal, and lacustrine sediments deposited during the late Wisconsin Episode and Holocene. Silurian and Devonian bedrock directly underlie the late Wisconsin Episode glacial sediments and non-glacial sediments deposited during the Holocene. Unconsolidated deposits were characterized through field observations; new and archived borehole data; lithologic information from the Indiana Department of Natural Resources water well database; and soils data from the U.S. Department of Agriculture, Natural Resource Conservation Service, Soil Survey Geographic (SSURGO) database. A light detection and ranging (LiDAR)-based digital elevation model was used in combination with geologic data to identify landforms and infer contacts between unconsolidated units. Summary descriptions of mapped units are listed on the map sheet with detailed descriptions in the accompanying pamphlet. In addition to the map and pamphlet, a composite spatial data set that conforms to the standardized database schema known as GeMS (Geologic Map Schema) is also available for download. Metadata records associated with each element within the spatial data set contain detailed descriptions of their purpose, constituent entities, and attributes. This geologic map was funded in part through the U.S. Geological Survey Great Lakes Geologic Mapping Coalition program under Cooperative Agreement No. G22AC00550.

  • Journal: Indiana Journal of Earth Sciences
  • Published: May 30, 2025
  • Authors: Henry Munro Loope + 1

Identifying the Latest Displacement and Long-Term Strong Earthquake Activity of the Haiyuan Fault Using High-Precision UAV Data, NE Tibetan Plateau

Strong earthquake activity along fault zones can lead to the displacement of geomorphic units such as gullies and terraces while preserving earthquake event data through changes in sedimentary records near faults. The quantitative analysis of these characteristics facilitates the reconstruction of significant earthquake activity history along the fault zone. Recent advancements in acquisition technology for high-precision and high-resolution topographic data have enabled more precise identification of displacements caused by fault activity, allowing for a quantitative assessment of the characteristics of strong earthquakes on faults. The 1920 Haiyuan earthquake, which occurred on the Haiyuan fault in the northeastern Tibetan Plateau, resulted in a surface rupture zone extending nearly 240 km. Although clear traces of surface rupture have been well preserved along the fault, debate regarding the maximum displacement is ongoing. In this study, we focused on two typical offset geomorphic sites along the middle segment of the Haiyuan fault that were previously identified as having experienced the maximum displacement during the Haiyuan earthquake. High-precision geomorphologic images of the two sites were obtained through unmanned aerial vehicle (UAV) surveys, which were combined with light detection and ranging (LiDAR) data along the fault zone. Our findings revealed that the maximum horizontal displacement of the Haiyuan earthquake at the Shikaguan site was approximately 5 m, whereas, at the Tangjiapo site, it was approximately 6 m. A cumulative offset probability distribution (COPD) analysis of high-density fault displacement measurements along the ruptures indicated that the smallest offset clusters on either side of the Ganyanchi Basin were 4.5 and 5.1 m. This analysis further indicated that the average horizontal displacements of the Haiyuan earthquake were approximately 4–6 m. Further examination of multiple gullies and geomorphic unit displacements at the Shikatougou site, along with a detailed COPD analysis of dense displacement measurements within a specified range on both sides, demonstrated that the cumulative displacement within 30 m of this section of the Haiyuan fault exhibited at least five distinct displacement clusters. These clusters may record five strong earthquake events in this fault segment over the past 10,000–13,000 years. The estimated magnitude, derived from the relationship between displacement and magnitude, ranged from Mw 7.4 to 7.6, with an uneven recurrence interval of approximately 2500–3200 years.
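
The cumulative offset probability distribution (COPD) used in this kind of analysis is essentially a sum of Gaussian kernels, one per offset measurement, whose peaks mark displacement clusters. The sketch below uses made-up offsets and uncertainties purely to illustrate the construction; it is not the authors' workflow.

```python
import numpy as np

def copd(offsets_m, sigmas_m, grid_m):
    """Cumulative offset probability distribution: a sum of Gaussian PDFs,
    one per displacement measurement, evaluated on a displacement grid."""
    offsets = np.asarray(offsets_m, float)[:, None]
    sigmas = np.asarray(sigmas_m, float)[:, None]
    pdfs = np.exp(-0.5 * ((grid_m - offsets) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return pdfs.sum(axis=0)

grid = np.linspace(0, 30, 3001)            # displacement axis, 0-30 m
offsets = [4.6, 5.0, 5.2, 9.8, 10.1]       # hypothetical gully offsets (m)
sigmas = [0.3, 0.4, 0.3, 0.5, 0.4]         # per-measurement uncertainties (m)
dist = copd(offsets, sigmas, grid)
print(grid[np.argmax(dist)])               # location of the strongest cluster
```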

  • Journal: Remote Sensing
  • Published: May 29, 2025
  • Authors: Xin Sun + 6

Optimizing Camera Settings and Unmanned Aerial Vehicle Flight Methods for Imagery-Based 3D Reconstruction: Applications in Outcrop and Underground Rock Faces

The structure from motion (SfM) and multiview stereo (MVS) techniques have proven effective in generating high-quality 3D point clouds, particularly when integrated with unmanned aerial vehicles (UAVs). However, the impact of image quality—a critical factor for SfM–MVS techniques—has received limited attention. This study proposes a method for optimizing camera settings and UAV flight methods to minimize point cloud errors under illumination and time constraints. The effectiveness of the optimized settings was validated by comparing point clouds generated under these conditions with those obtained using arbitrary settings. The evaluation involved measuring point-to-point error levels for an indoor target and analyzing the standard deviation of cloud-to-mesh (C2M) and multiscale model-to-model cloud comparison (M3C2) distances across six joint planes of a rock mass outcrop in Seoul, Republic of Korea. The results showed that optimal settings improved accuracy without requiring additional lighting or extended survey time. Furthermore, we assessed the performance of SfM–MVS under optimized settings in an underground tunnel in Yeoju-si, Republic of Korea, comparing the resulting 3D models with those generated using Light Detection and Ranging (LiDAR). Despite challenging lighting conditions and time constraints, the results suggest that SfM–MVS with optimized settings has the potential to produce 3D models with higher accuracy and resolution at a lower cost than LiDAR in such environments.
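
The C2M and M3C2 comparisons cited above are normally computed in dedicated tools such as CloudCompare; as a rough, hypothetical stand-in, nearest-neighbour distance statistics between two co-registered clouds can be obtained as follows (synthetic data).

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_distance_stats(reference_pts, compared_pts):
    """Mean and standard deviation of nearest-neighbour distances from each
    compared point to the reference cloud (a crude proxy for C2M/M3C2)."""
    d, _ = cKDTree(reference_pts).query(compared_pts, k=1)
    return d.mean(), d.std()

rng = np.random.default_rng(1)
ref = rng.uniform(0, 10, size=(50_000, 3))            # e.g. a LiDAR cloud
cmp_ = ref + rng.normal(0, 0.02, size=ref.shape)      # e.g. an SfM-MVS cloud
print(nn_distance_stats(ref, cmp_))
```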

  • Journal: Remote Sensing
  • Published: May 28, 2025
  • Authors: Junsu Leem + 6

Experimental Study of Lidar System for a Static Object in Adverse Weather Conditions

Thanks to light detection and ranging (LiDAR), unmanned ground vehicles (UGVs) are able to detect objects in their environment and measure the distance to them. The device gives a vehicle the ability to see its surroundings in real time. However, the accuracy of LiDAR can be reduced, especially in rainy weather, fog, urban smog, and the like. These factors can have disastrous consequences as they increase the errors in the vehicle’s control computer. The aim of this research was to determine the most appropriate LiDAR scanning frequency for static objects, depending on the distance to them and the weather conditions; the study is therefore based on empirical data obtained with the RoboPeak A1M8 LiDAR. The results obtained in rainy conditions are compared with those in clear weather using stochastic methods. A direct influence of both the scanning frequency and the rain on the accuracy of the LiDAR measurements was found. Range measurement errors increase in rainy weather; as the scanning frequency increases, the results become more accurate but capture fewer object points. The higher frequencies lead to about five times less error at the farthest distances compared with the lower frequencies.

  • Journal: Journal of Sensor and Actuator Networks
  • Published: May 26, 2025
  • Authors: Saulius Japertas + 2

Application of high-precision terrestrial light detection and ranging to determine the dislocation geomorphology of Yumen Fault, China

Ground-based three-dimensional (3D) light detection and ranging (LiDAR) is used to collect high-density point clouds of terrain for high-precision topographic survey, to remove surface vegetation, and to allow for the study of fault rupture. The west side of the Yumen Fault in China, characterized by a thrust nappe, was selected as the study area in order to document this typical fault landform. Fundamental issues such as ground-based 3D LiDAR field collection, data processing, and 3D fault modeling were then analyzed. The high-precision topography of the surface rupture in this area was obtained, revealing the typical dextral strike–slip dislocation along the fault zone. During data processing, iterative closest point (ICP) registration and an optimal point cloud density were used to improve the efficiency and precision of processing. Finally, a digital elevation model (DEM) with a spatial resolution of 0.1 m was derived for the study area to classify geomorphic units, extract information on the fault scarp and fault-broken gully terrain, and quantitatively analyze the horizontal dislocation of gullies and the displacement of the fault scarp. This process revealed several seismic events along the fault zone, accompanied by a typical dextral strike–slip phenomenon.
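
The iterative closest point registration mentioned above can be sketched as a textbook point-to-point ICP: nearest-neighbour matching followed by a Kabsch/SVD alignment step, repeated until convergence. This minimal illustration assumes the two clouds already roughly overlap; it is not the authors' processing software, and in practice a library implementation would be used.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: match each source point to its nearest
    target point, solve the best-fit rigid transform with an SVD (Kabsch)
    step, apply it, and repeat. Returns the accumulated rotation/translation."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)
        matched = target[idx]
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```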

  • Journal: Frontiers in Remote Sensing
  • Published: May 26, 2025
  • Authors: Shuai Kang + 4

Performance Comparison of Multipixel Biaxial Scanning Direct Time-of-Flight Light Detection and Ranging Systems With and Without Imaging Optics.

The laser pulse detection probability of a scanning direct time-of-flight light detection and ranging (LiDAR) measurement is evaluated based on the optical signal distribution on a multipixel single photon avalanche diode (SPAD) array. These detectors intrinsically suffer from dead-times after the successful detection of a single photon and, thus, allow only for limited counting statistics when multiple returning laser photons are imaged on a single pixel. By blurring the imaged laser spot, the transition from single-pixel statistics with high signal intensity to multipixel statistics with less signal intensity is examined. Specifically, a comparison is made between the boundary cases in which (i) the returning LiDAR signal is focused through optics onto a single pixel and (ii) the detection is performed without lenses using all available pixels on the sensor matrix. The omission of imaging optics reduces the overall system size and minimizes optical transfer losses, which is crucial given the limited laser emission power due to safety standards. The investigation relies on a photon rate model for interfering (background) and signal light, applied to a simulated first-photon sensor architecture. For single-shot scenarios that reflect the optimal use of the time budget in scanning LiDAR systems, the lens-less and blurred approaches can achieve comparable or even superior results to the focusing system. This highlights the potential of fully solid-state scanning LiDAR systems utilizing optical phase arrays or multidirectional laser chips.
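
As a toy version of the counting-statistics argument above: under a Poisson arrival model with at most one registered count per pixel per shot, spreading the same mean photon number over N pixels raises the expected number of counts from 1 − e^(−μ) to N(1 − e^(−μ/N)). The sketch below ignores background light, timing jitter, and crosstalk, so it is only an illustration of the trend, not the paper's photon rate model.

```python
import numpy as np

def expected_counts(mean_photons, n_pixels):
    """Expected registered counts per laser shot when the return spot is spread
    over n_pixels SPAD pixels, assuming Poisson arrivals and at most one count
    per pixel per shot (dead-time limited); background light is ignored."""
    mu_per_pixel = mean_photons / n_pixels
    return n_pixels * (1.0 - np.exp(-mu_per_pixel))

for n in (1, 4, 16, 64):                         # focused -> increasingly blurred
    print(n, expected_counts(mean_photons=10.0, n_pixels=n))
```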

  • Journal: Sensors (Basel, Switzerland)
  • Published: May 21, 2025
  • Authors: Konstantin Albert + 6

Hypergraph Convolution Network Classification for Hyperspectral and LiDAR Data.

Conventional remote sensing classification approaches based on single-source data exhibit inherent limitations, driving significant research interest in improved multimodal data fusion techniques. Although deep learning methods based on convolutional neural networks (CNNs), transformers, and graph convolutional networks (GCNs) have demonstrated promising results in fusing complementary multi-source data, existing methodologies demonstrate limited efficacy in capturing the intricate higher-order spatial-spectral dependencies among pixels. To overcome these limitations, we propose HGCN-HL, a novel multimodal deep learning framework that integrates hypergraph convolutional networks (HGCNs) with lightweight CNNs. Specifically, an adaptive weight mechanism is first designed to preliminarily fuse the spectral features of hyperspectral imaging (HSI) and Light Detection and Ranging (LiDAR), enhancing the feature representation ability. Then, superpixel-based dynamic hyperedge construction enables the joint characterization of homogeneous regions across both modalities, significantly boosting large-scale object recognition accuracy. Finally, local detail features are captured through a parallel CNN branch, complementing the global relationship modeling of the HGCN. Comprehensive experiments conducted on three benchmark datasets demonstrate the superior performance of our method compared to existing state-of-the-art approaches. Notably, the proposed framework achieves significant improvements in both training efficiency and inference speed while maintaining competitive accuracy.
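
For orientation, a single hypergraph convolution layer in the standard HGNN formulation, X' = σ(Dv^(−1/2) H W De^(−1) Hᵀ Dv^(−1/2) X Θ), can be written as below on a toy incidence matrix. This is a generic layer only, not the proposed HGCN-HL architecture, and the feature arrays are synthetic.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One hypergraph convolution layer (HGNN-style):
    X' = relu(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta).
    X: (n, f_in) node features; H: (n, e) incidence matrix; Theta: (f_in, f_out)."""
    n, e = H.shape
    w = np.ones(e) if edge_w is None else np.asarray(edge_w, float)
    dv = H @ w                                   # weighted node degrees
    de = H.sum(axis=0)                           # hyperedge degrees
    A = np.diag(dv ** -0.5) @ H @ np.diag(w) @ np.diag(1.0 / de) @ H.T @ np.diag(dv ** -0.5)
    return np.maximum(A @ X @ Theta, 0.0)        # ReLU

# Toy example: 5 pixels grouped into 2 superpixel hyperedges.
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]], float)
X = np.random.default_rng(0).normal(size=(5, 8))       # fused HSI+LiDAR features
Theta = np.random.default_rng(1).normal(size=(8, 4))
print(hypergraph_conv(X, H, Theta).shape)               # -> (5, 4)
```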

  • Journal: Sensors (Basel, Switzerland)
  • Published: May 14, 2025
  • Authors: Lei Wang + 1

Classification of Forest Stratification and Evaluation of Forest Stratification Changes over Two Periods Using UAV-LiDAR

The demand for spatially explicit and comprehensive forest attribute data has continued to increase. Light detection and ranging (LiDAR) remote sensing, which can measure three-dimensional (3D) forest attributes, plays a significant role in meeting this demand. However, only a few studies have used uncrewed aerial vehicle (UAV)-LiDAR to extract the characteristics of the 3D structure of the forest understory. Therefore, this study proposes a method for classifying and mapping forest stratification and evaluating forest stratification changes using multitemporal UAV-LiDAR data. The study area is a forest of approximately 25 ha on the west side of the Expo Commemorative Park (Suita City, Osaka Prefecture, Japan). Three-dimensional point cloud models from two measurement periods during the leaf-fall season were used. Forest stratification was classified using time-series clustering of the 2024 data. The classification of forest stratification and its spatial distribution effectively reflected the actual site conditions. By applying time-series clustering, the forest stratification was successfully classified using only UAV-LiDAR data. Changes in forest stratification were evaluated using data from 2022 to 2024. In areas where changes in forest stratification were evaluated as significant, evidence of tree felling was confirmed. In addition, changes in forest stratification were quantitatively evaluated. The proposed method uses only UAV-LiDAR, which is highly versatile; thus, it is expected to be applicable to a wide range of forests. The results of this study are expected to deepen our ecological understanding of forests and contribute to forest monitoring and management.
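
A rough sketch of per-cell vertical-structure clustering in the spirit of the study is shown below; it uses plain k-means on normalized height histograms rather than the paper's time-series clustering, and the grid size, bin size, and cluster count are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def stratification_clusters(points, cell=5.0, zmax=30.0, dz=1.0, k=4):
    """Cluster ground cells by their normalized vertical point-density profile.
    points: (n, 3) array of x, y, height-above-ground (metres)."""
    bins = np.arange(0.0, zmax + dz, dz)
    xy = np.floor(points[:, :2] / cell).astype(int)
    cells, inv = np.unique(xy, axis=0, return_inverse=True)
    inv = inv.ravel()
    profiles = np.zeros((len(cells), len(bins) - 1))
    for i in range(len(cells)):
        h, _ = np.histogram(points[inv == i, 2], bins=bins)
        profiles[i] = h / max(h.sum(), 1)        # normalize so shape, not density, matters
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)
    return cells, labels
```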

  • Journal: Remote Sensing
  • Published: May 10, 2025
  • Authors: Hideyuki Niwa
  • Open Access

Fusion of HSI, UHR, and LiDAR Data for Characterization of the Urban Environment

The study of the urban environment is undoubtedly key to moving towards sustainable transformations. However, remotely sensed observation of such areas is complex and challenging, as urban targets share many similar spectral characteristics, making image analysis of urban areas a difficult task. Although sensor systems have recently improved, on their own they still cannot attain a sufficient level of detail to qualitatively and quantitatively analyze targets of interest in an urban image. In this sense, multisource data fusion emerges as a feasible solution for detailed detection and interpretation of the elements that compose an urban scene. This work performs data fusion using a hyperspectral image (HSI), an optical RGB ultra-high-resolution image, and Light Detection and Ranging (LiDAR) data for a detailed land-cover characterization of an urban environment. Seven datasets are employed, including the separate RGB, HSI, and LiDAR data as well as their fusion; the fused set is used to demonstrate the potential of integrating information from multiple sensors compared with the accuracy obtained from a single sensor. Random Forest was chosen to perform the classifications since it can handle large amounts of data and achieve satisfactory accuracy. The overall accuracy reached by the fused dataset is significantly superior to that obtained by the other datasets, demonstrating that the combined use of multisource data refines the classification results and allows for an accurate and detailed classification legend.
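
A minimal sketch of layer-stacking fusion followed by Random Forest classification could look like the following; the per-pixel feature arrays, band counts, and labels are entirely synthetic (scikit-learn assumed), so the printed accuracy is meaningless and only the mechanics are illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pixel feature stacks: RGB (3 bands), HSI (100 bands),
# LiDAR-derived height/intensity (2 features); labels are 7 land-cover classes.
rng = np.random.default_rng(0)
n = 5000
rgb, hsi, lidar = rng.random((n, 3)), rng.random((n, 100)), rng.random((n, 2))
labels = rng.integers(0, 7, n)

fused = np.hstack([rgb, hsi, lidar])             # simple layer-stacking fusion
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(fused[:4000], labels[:4000])
print(clf.score(fused[4000:], labels[4000:]))    # overall accuracy on held-out pixels
```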

  • Journal: Revista Brasileira de Cartografia
  • Published: May 9, 2025
  • Authors: Pâmela Carvalho Molina + 3

A multi-modality ground-to-air cross-view pose estimation dataset for field robots

High-precision localization is critical for intelligent robotics in autonomous driving, smart agriculture, and military operations. While Global Navigation Satellite System (GNSS) provides global positioning, its reliability deteriorates severely in signal-degraded environments like urban canyons. Cross-view pose estimation using aerial-ground sensor fusion offers an economical alternative, yet current datasets lack field scenarios and high-resolution LiDAR support. This work introduces a multimodal cross-view dataset addressing these gaps. It contains 29,940 synchronized frames across 11 operational environments (6 field environments, 5 urban roads), featuring: 1) 144-channel LiDAR point clouds, 2) ground-view RGB images, and 3) aerial orthophotos. Centimeter-accurate georeferencing is ensured through GNSS fusion and post-processed kinematic positioning. The dataset uniquely integrates field environments and high-resolution LiDAR-aerial-ground data triplets, enabling rigorous evaluation of 3-DoF pose estimation algorithms for orientation alignment and coordinate transformation between perspectives. This resource supports development of robust localization systems for field robots in GNSS-denied conditions, emphasizing cross-view feature matching and multisensor fusion. Light Detection And Ranging (LiDAR)-enhanced ground truth further distinguishes its utility for complex outdoor navigation research.

  • Journal: Scientific Data
  • Published: May 7, 2025
  • Authors: Xia Yuan + 3
  • Open Access

A Dynamic Blind Zone Simulation and Analysis Model for Roadside Light Detection and Ranging Sensor Deployment Considering the Full Roadway Terrain and Vehicle Dynamics

Blind zones in light detection and ranging (LiDAR) sensors arise from their limited physical field of view and obstructions caused by static infrastructure or moving objects. Although originally intended for vehicle-based applications, LiDAR sensors are now increasingly deployed in roadside infrastructure for traffic monitoring and connected and automated vehicle safety and mobility applications. However, there is a dearth of robust tools for analyzing their detection range, resolution, and other characteristics in such settings. This study introduces a three-dimensional (3D) blind zone simulation model for analyzing the detection characteristics of roadside LiDAR sensor deployment. The model replicates the impact of static infrastructure conditions and dynamic blind zones during live traffic. Initially, a real-world digital surface model (DSM) captures 3D data of road surfaces and obstructing infrastructure objects. Optical geometry models then assess blind zone severity across various roadway areas. Subsequent 3D vehicle shape and dynamic simulations evaluate blind zone distributions under typical traffic conditions. The model’s effectiveness is validated using field 3D point cloud data and vehicle detection data collected from a roadside LiDAR site on Route 18 in New Brunswick, NJ. Evaluation results demonstrate the model’s capability in analyzing complex static and dynamic blind zone distributions, offering insights for optimizing LiDAR sensor location, height, tilting angle, and manufacturer configuration parameters to minimize sensing blind zones. For code availability, see https://github.com/rutgerstslab/LiDAR-Coverage-Analysis .
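
The geometric core of such a blind-zone analysis is a line-of-sight test against the DSM. The sketch below, with a hypothetical grid and coordinates and unrelated to the released tool linked above, samples the sensor-to-target ray and flags it as occluded whenever the surface rises above it.

```python
import numpy as np

def in_blind_zone(dsm, cell, sensor_xyz, target_xyz, step=0.5):
    """Return True if the straight line from the roadside LiDAR to the target
    is blocked by the surface stored in a DSM grid (row = y, col = x)."""
    p0, p1 = np.asarray(sensor_xyz, float), np.asarray(target_xyz, float)
    dist = np.linalg.norm(p1 - p0)
    for s in np.arange(step, dist, step):        # sample along the sight line
        x, y, z = p0 + (p1 - p0) * (s / dist)
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < dsm.shape[0] and 0 <= j < dsm.shape[1] and dsm[i, j] > z:
            return True
    return False

dsm = np.zeros((100, 100))
dsm[40:45, 40:45] = 6.0                          # a 6-m obstruction
print(in_blind_zone(dsm, 1.0, sensor_xyz=(10, 10, 5), target_xyz=(80, 80, 1)))
```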

  • Journal: Transportation Research Record: Journal of the Transportation Research Board
  • Published: May 6, 2025
  • Authors: Yi Ge + 2

Using LiDAR‐Based DEM Elevation Difference Calculations to Estimate Net Streambank Erosion in an Iowa River, USA

Streambank erosion is an important source of sediment to river systems but is difficult to quantify at watershed scales. In this study, high-resolution Light Detection and Ranging (LiDAR) measurements collected in 2009 and 2020 were used to quantify the difference in land surface elevation that occurred along the fourth- and fifth-order streams in the Old Mans Creek watershed in southeast Iowa. Study objectives were to quantify the volume of streambank sediment erosion and deposition occurring along the river systems and to compare net channel erosion to watershed sediment export. Results indicated that streambank erosion and deposition along the fourth- and fifth-order channels totaled nearly 720,000 m3 and 148,000 m3, respectively, over the 11-year study period. Five times more streambank erosion occurred than deposition, and the difference between the two totals (net sediment erosion) comprised 77% of the sediment export from the watershed. The contribution of streambank sediment to basin export, along with estimates of mean annual streambank recession derived from the analyses, was consistent with results reported in other studies of streambank erosion. The LiDAR differencing methodology was able to identify areas of both sediment erosion and deposition occurring in the stream channels and quantify the net difference, which is related to watershed-scale sediment export.
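
The differencing calculation itself is straightforward. Below is a hypothetical sketch of summing erosion, deposition, and net change from two co-registered DEMs; the grid spacing, change-detection threshold, and arrays are made up and are not the study's actual data or thresholds.

```python
import numpy as np

def erosion_deposition_volumes(dem_t0, dem_t1, cell=1.0, min_change=0.1):
    """Difference two co-registered DEMs (metres) and sum erosion and
    deposition volumes, ignoring changes below a detection threshold."""
    dz = dem_t1 - dem_t0
    dz[np.abs(dz) < min_change] = 0.0            # suppress vertical noise
    cell_area = cell * cell
    erosion = -dz[dz < 0].sum() * cell_area      # m^3 lost from the banks
    deposition = dz[dz > 0].sum() * cell_area    # m^3 gained in the channel
    return erosion, deposition, erosion - deposition   # last value: net erosion

rng = np.random.default_rng(0)
dem_2009 = rng.uniform(200, 210, size=(500, 500))
dem_2020 = dem_2009 + rng.normal(0, 0.05, size=dem_2009.shape)
dem_2020[100:110, :] -= 1.2                      # a simulated eroded bank strip
print(erosion_deposition_volumes(dem_2009, dem_2020))
```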

  • Journal: River Research and Applications
  • Published: May 6, 2025
  • Authors: Calvin F Wolter + 3

SWIR-transparent silicon hot-carrier photodetector for unobstructed real-time laser power monitoring

Optically transparent photodetectors are becoming essential components in next-generation photonic technologies such as augmented reality and light-field imaging. While transparent photodetectors have been extensively developed for the visible spectrum, extending this capability to the short-wavelength infrared (SWIR) regime remains a significant challenge. This is primarily due to the lack of suitable transparent electrodes and the difficulty in minimizing the thickness of light-absorbing layers. In this work, we demonstrate an SWIR-transparent silicon hot-carrier photodetector, enabled by an ultrathin silver film topped with a high-refractive-index overlayer, serving as a transparent electrode. The electrode design exploits destructive interference to minimize reflection, achieving an 86% transmittance at 1300 nm and a normalized transmittance of 123% relative to a silicon substrate. Integrating this electrode into a silicon substrate forms a metal–silicon Schottky junction for SWIR photon detection through hot-carrier injection, with photon absorption confined to a sub-10 nm metal layer. By leveraging the optical transparency of our photodetector, we demonstrate a laser power monitoring strategy that enables real-time optical power measurements without compromising the spatial profile of the laser beam and altering its optical path. This work paves the way for compact, streamlined designs in applications such as optical data transmission and light detection and ranging (LiDAR), where continuous laser power monitoring is crucial.

  • Journal: Optica
  • Published: May 2, 2025
  • Authors: Eui-Hyoun Ryu + 8

Unveiling the performance and influential factors of GEDI L2A for building height retrieval

Estimating building heights is essential for urban planning, disaster assessment, and sustainable development. While the Global Ecosystem Dynamics Investigation (GEDI) Light Detection and Ranging (LiDAR) mission was primarily designed for forest measurements, it also holds potential for large-scale building height retrieval. This study evaluates the performance and influential factors of GEDI L2A version 2 (V2) data for building height retrieval by comparing it with the airborne LiDAR-derived normalized digital surface model (nDSM). To ensure data reliability, we refined the GEDI dataset by excluding footprints outside buildings, filtering out low-quality footprints, removing footprints failing to detect ground elevation using the interquartile range (IQR) detection method, and excluding footprints with geolocation errors through an eight-direction offset approach. We assessed the effectiveness of different relative height (RH) metrics and systematically analyzed key influential factors in building height retrieval. Results indicate that GEDI RH96 achieves the highest correlation with reference building heights (R2 = 0.82, MAE = 1.67 m, RMSE = 4.40 m, rRMSE = 34.46%). GEDI demonstrates the highest accuracy for mid- and high-rise buildings, whereas low-rise buildings (<5 m) exhibit lower accuracy and tend to be overestimated (RMSE = 2.17 m, rRMSE = 49.79%). Sensitivity and slope are the most significant factors influencing the accuracy of building height retrieval. GEDI data with sensitivity above 0.95 showed a 4.66% decrease in rRMSE compared to data with sensitivity above 0.90. Slope negatively affects building height retrieval accuracy. Building roof type has a moderate impact; flat-roof buildings exhibit a slight advantage over pitched- and curved-roof buildings, with rRMSE reductions of 1.86% and 4.74%, respectively. Neither GEDI beam type nor data acquisition time significantly affects the accuracy of height retrieval. Overall, this study provides valuable insights for optimizing GEDI data in building height retrieval, contributing to large-scale building height mapping.
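
Two of the routine steps mentioned above, IQR-based screening and the reported accuracy metrics, can be sketched with generic formulas and synthetic inputs; this is not the authors' exact filtering procedure, and the numbers below are simulated.

```python
import numpy as np

def iqr_filter(values, k=1.5):
    """Boolean mask keeping values inside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values >= q1 - k * iqr) & (values <= q3 + k * iqr)

def accuracy_metrics(pred, ref):
    """R^2, MAE, RMSE, and relative RMSE (%) of predicted vs reference heights."""
    err = pred - ref
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    rrmse = 100.0 * rmse / np.mean(ref)
    r2 = 1.0 - np.sum(err ** 2) / np.sum((ref - ref.mean()) ** 2)
    return r2, mae, rmse, rrmse

rng = np.random.default_rng(0)
ref = rng.uniform(3, 60, 1000)                   # reference building heights (m)
pred = ref + rng.normal(0, 4, 1000)              # simulated RH-metric estimates
keep = iqr_filter(pred - ref)                    # screen gross outliers
print(accuracy_metrics(pred[keep], ref[keep]))
```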

  • Journal: GIScience & Remote Sensing
  • Published: May 1, 2025
  • Authors: Peimin Chen + 7

Design and Implementation of a LiDAR Scanner for Inspecting Aircraft’s Internal Structures

In modern aviation, effective maintenance is crucial for ensuring aircraft safety and performance. Inspecting internal components within aircraft panels poses significant challenges, often requiring specialized tools like borescopes. This project aims to develop a LiDAR-based prototype scanner to generate high-resolution 3D images of internal structures, providing engineers with a comprehensive view for maintenance inspections. The system utilizes LiDAR (Light Detection and Ranging) technology to capture depth data, which is processed to create detailed 3D models that enable the identification of issues such as misalignments, cracks, or missing components. A microcontroller, such as an Arduino Uno, interfaces with the LiDAR sensor to collect data, which is then transmitted to a laptop for visualization and analysis. This non-invasive approach offers a quick and accurate alternative to traditional inspection methods, reducing reliance on manual borescopic techniques. While the current version serves as a proof of concept, future enhancements could include industrial-grade LiDAR sensors and automated inspection systems to meet stringent aviation safety standards. Ultimately, this project seeks to demonstrate the potential of LiDAR technology to improve the efficiency and accuracy of aircraft maintenance workflows. Keywords: LiDAR, Aircraft Inspection, 3D Imaging, Maintenance, Non-Invasive Techniques
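
On the processing side, turning single-point range readings plus the scanner's known pan/tilt angles into a 3D point cloud is a spherical-to-Cartesian conversion. The sketch below is a hypothetical laptop-side step; the Arduino/sensor interface itself is not shown and is not taken from the paper.

```python
import numpy as np

def scan_to_points(ranges_m, pan_deg, tilt_deg):
    """Convert single-point range readings taken at known pan/tilt angles into
    Cartesian points for 3D reconstruction of the inspected cavity."""
    pan = np.radians(np.asarray(pan_deg, float))
    tilt = np.radians(np.asarray(tilt_deg, float))
    r = np.asarray(ranges_m, float)
    x = r * np.cos(tilt) * np.cos(pan)
    y = r * np.cos(tilt) * np.sin(pan)
    z = r * np.sin(tilt)
    return np.column_stack([x, y, z])

# e.g. one horizontal sweep at zero tilt
print(scan_to_points([1.2, 1.3, 1.25], [0, 5, 10], [0, 0, 0]))
```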

  • Journal: International Scientific Journal of Engineering and Management
  • Published: Apr 30, 2025
  • Authors: Athul Ajith

Radiometric calibration and reflectivity inversion of spaceborne LiDAR

Light detection and ranging (LiDAR) instruments onboard satellites are used not only for global height measurement but also to actively obtain global surface reflectance, which helps separate surface targets on the basis of their reflectivity properties. In this paper, methods for obtaining surface reflectance with the LiDAR onboard the Terrestrial Ecosystem Carbon Inventory Satellite, nicknamed Goumang, are studied. The radiation model used by the LiDAR to obtain surface reflectance is constructed on the basis of the transmission path of the laser. A method for obtaining the radiometric calibration coefficients, which are key parameters for calculating surface reflectance in the radiation model, is designed. These coefficients are derived from automated data collected at the Chinese radiometric calibration site in Dunhuang. The uncertainty of this radiometric calibration is evaluated, yielding a value of approximately 5%. Additionally, a retrieval method for obtaining global surface reflectance based on global aerosol products from other satellites is introduced. The results are compared with ground-measured values, revealing a relative deviation of approximately 10%. This research provides a feasible pathway for retrieving global surface reflectance by LiDAR.
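
In essence, the reflectance retrieval inverts a lidar radiation equation. A heavily simplified, hypothetical form, omitting pulse shape, detector response, and the other terms a full model such as the paper's would include, is shown below; the numeric values are made up.

```python
def surface_reflectance(received_energy, emitted_energy, range_m, t_atm, cal_coeff):
    """Invert a simplified lidar radiation equation for a Lambertian surface:
    P_r = C * E_t * rho * T_atm**2 / R**2  =>  rho = P_r * R**2 / (C * E_t * T_atm**2).
    cal_coeff (C) bundles receiver area, optical efficiency, and geometry, and is
    what an on-site radiometric calibration determines."""
    return received_energy * range_m ** 2 / (cal_coeff * emitted_energy * t_atm ** 2)

# Illustrative numbers only; the calibration coefficient here is invented.
print(surface_reflectance(2.0e-9, 1.0e-3, 5.0e5, 0.8, 2.6e6))
```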

  • Journal: Applied Optics
  • Published: Apr 28, 2025
  • Authors: Xionghao Huang + 6

Real LiDAR point cloud synthesis for 3D object detection in snowy weather

Light Detection And Ranging (LiDAR) sensors can generate a number of sequential 3D point clouds, which are widely deployed in many real-world systems. 3D object detection in point clouds is one of the most fundamental tasks. Unfortunately, existing 3D object detection methods degrade in snowy weather, because in that situation annotated samples are difficult to collect. To solve this issue, we propose a novel GAN-based Snowfall Point-cloud AugmentOR (GANspaor) to generate high-quality synthetic snowfall point clouds as augmentations. The basic idea of GANspaor is to transfer annotated point clouds to snowfall versions by simultaneously learning the global style of real snowfall point clouds and the local details of physics-induced ones. Our framework fuses data-driven and physical modeling methods for rapidly generating data in snowy weather. To evaluate the effectiveness of GANspaor, we employ a number of recent 3D object detection methods and train them using the synthetic samples of GANspaor as auxiliary augmentations. Moreover, we conduct a comparative analysis of the characteristics of the data distributions of the snowy point clouds synthesized by GANspaor. Experimental results demonstrate that GANspaor can improve the performance of 3D object detection methods compared with other existing snowfall point cloud simulators.

  • Journal: Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering
  • Published: Apr 28, 2025
  • Authors: Yuhao Chen + 5

A LiDAR camera with an edge

A novel light detection and ranging (LiDAR) design was proposed and demonstrated using just a conventional global-shutter complementary metal-oxide-semiconductor (CMOS) camera. Utilizing the jittering rising edge of the camera shutter, the distance of an object can be obtained by averaging hundreds of camera frames. The intensity (brightness) of an object in the image is linearly proportional to its distance from the camera. The achieved time precision is about one nanosecond, while the range can reach beyond 50 m using a modest setup. The new design offers a simple yet powerful alternative to existing LiDAR techniques.
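
The ranging principle can be caricatured in a few lines: average many frames to suppress the shutter-edge jitter, then map mean brightness to distance through a linear calibration. The constants a and b below are hypothetical placeholders for whatever calibration the real system uses; this is not the authors' processing code.

```python
import numpy as np

def distance_from_intensity(frames, a, b):
    """Average many frames and map mean pixel brightness to distance via a
    linear calibration d = a * I_mean + b (a, b are hypothetical constants).
    frames: (n_frames, H, W) array of raw camera frames."""
    mean_intensity = frames.mean(axis=0)         # averaging suppresses shutter jitter
    return a * mean_intensity + b
```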

  • Journal: Measurement Science and Technology
  • Published: Apr 28, 2025
  • Authors: Blessed Oguh + 3
