Understanding defoliation of Pinus plantations in the Mediterranean mountains using tree segmentation and ALS time series.


Similar Papers
  • Research Article
  • Citations: 4
  • 10.3390/f8100401
Countering Negative Effects of Terrain Slope on Airborne Laser Scanner Data Using Procrustean Transformation and Histogram Matching
  • Oct 21, 2017
  • Forests
  • Endre Hansen + 5 more

Forest attributes such as tree heights, diameter distribution, volumes, and biomass can be modeled utilizing the relationship between remotely sensed metrics as predictor variables, and measurements of forest attributes on the ground. The quality of the models relies on the actual relationship between the forest attributes and the remotely sensed metrics. The processing of airborne laser scanning (ALS) point clouds acquired under heterogeneous terrain conditions introduces a distortion of the three-dimensional shape and structure of the ALS data for tree crowns and thus errors in the derived metrics. In the present study, Procrustean transformation and histogram matching were proposed as a means of countering the distortion of the ALS data. The transformations were tested on a dataset consisting of 192 field plots of 250 m2 in size located on a gradient from gentle to steep terrain slopes in western Norway. Regression models with predictor variables derived from (1) Procrustean-transformed and (2) histogram-matched point clouds were compared to models with variables derived from untransformed point clouds. Models for timber volume, basal area, dominant height, Lorey’s mean height, basal area weighted mean diameter, and number of stems were assessed. The results indicate that both (1) Procrustean transformation and (2) histogram matching can be used to counter crown distortion in ALS point clouds. Furthermore, both techniques are simple and can easily be implemented in the traditional processing chain of ALS metrics extraction.
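
The histogram-matching idea above, remapping the height distribution of a slope-distorted plot onto that of a reference, can be sketched with classic quantile mapping. This is a minimal NumPy sketch, not the authors' implementation; the function name and toy data are illustrative:

```python
import numpy as np

def histogram_match(source, reference):
    """Quantile mapping: replace each source value by the reference value
    at the same empirical quantile, so the matched values follow the
    reference distribution."""
    src_sorted = np.sort(source)
    ref_sorted = np.sort(reference)
    # Empirical quantile of each source value within its own distribution
    quantiles = np.searchsorted(src_sorted, source, side="right") / len(source)
    # Reference value at that quantile
    idx = np.clip((quantiles * len(reference)).astype(int) - 1, 0, len(reference) - 1)
    return ref_sorted[idx]

# Toy example: crown heights from a steep plot matched to a gentle-slope reference
rng = np.random.default_rng(0)
slope_heights = rng.normal(12.0, 1.5, 500)  # distorted height distribution
flat_heights = rng.normal(15.0, 2.5, 500)   # reference height distribution
matched = histogram_match(slope_heights, flat_heights)
```

After matching, the transformed heights share the mean and spread of the reference distribution, which is what makes the derived ALS metrics comparable across slope classes.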

  • Research Article
  • 10.4233/uuid:8900fac8-a76c-482a-b280-e1758783b5b3
Automatic Object Extraction from Airborne Laser Scanning Point Clouds for Digital Base Map Production
  • Feb 17, 2021
  • E Widyaningrum

A base map provides essential geospatial information for applications such as urban planning, intelligent transportation systems, and disaster management. Buildings and roads are the main ingredients of a base map and are represented by polygons. Unfortunately, manually delineating their boundaries from remote sensing data is time consuming and labour intensive. Airborne laser scanning (ALS) point clouds provide dense and accurate 3D positional information. Automatic extraction of buildings and roads from 3D point clouds is challenging because of their irregular shapes, occlusions in the data, and irregularity of ALS point clouds. This study focuses on two particular objectives: (i) accurate classification of a large volume of ALS 3D point clouds; and (ii) smooth and accurate building and road outline extraction. To achieve the classification objective, we perform point-wise deep learning to classify an ALS point cloud of a complex urban scene in Surabaya, Indonesia. The point cloud is colored by airborne orthophotos. Training data is obtained from an existing 2D topographic base map by a semi-automatic method proposed in this research. A dynamic-graph convolutional neural network is used to classify the point cloud into four classes: bare land, trees, buildings, and roads. We investigate effective input feature combinations for outdoor point cloud classification. A highly acceptable classification result of 91.8% overall accuracy is achieved when using the full combination of RGB color and LiDAR features. To address the objective of outline extraction, we propose building and road outline extraction methods that run directly on ALS point cloud data. For accurate and smooth building outline extraction, we propose two different methods. First, we develop the ordered Hough transform (OHT), which is an extension of the traditional Hough transform, by explicitly incorporating the sequence of points to form the outline. 
Second, we propose a new method based on Medial Axis Transform (MAT) skeletons which takes advantage of the skeleton points to detect building corners. The OHT method is resistant to noise but requires prior knowledge of a building’s main directions. In contrast, the MAT-based method does not require such orientation initialization but is more sensitive to noise on building edges. We compare the results of our building outline extraction methods to an existing RANSAC-based method, in terms of geometric accuracy, completeness of building corners, and computation time, and demonstrate that the MAT-based approach has the highest geometric accuracy, results in more complete building corners, and is slightly faster than the other methods. For road network extraction, we develop a method based on skeletonization, which results in complete and continuous road centerlines and boundaries. In our study area, several roads are disrupted and disconnected due to trees. We design a tree-constrained approach to fill road gaps and integrate road width estimated from a medial axis algorithm. Comparison to reference data shows that the proposed method is able to extract almost all existing roads in the study area, and even detects roads that were not present in the reference due to human errors. We conclude that our object extraction methods enable a completely automatic procedure, extracting more accurate building and road outlines from ALS point cloud data. This contributes to a higher automation readiness level for faster and cheaper base map production.

  • Research Article
  • Citations: 6
  • 10.3390/rs13173536
A Comparison of ALS and Dense Photogrammetric Point Clouds for Individual Tree Detection in Radiata Pine Plantations
  • Sep 6, 2021
  • Remote Sensing
  • Irfan A Iqbal + 3 more

Digital aerial photogrammetry (DAP) has emerged as a potentially cost-effective alternative to airborne laser scanning (ALS) for forest inventory methods that employ point cloud data. Forest inventory derived from DAP using area-based methods has been shown to achieve accuracy similar to that of ALS data. At the tree level, individual tree detection (ITD) algorithms have been developed to detect and/or delineate individual trees either from ALS point cloud data or from ALS- or DAP-based canopy height models. The application of ITD algorithms directly to DAP-based point clouds has not yet been reported. In this research, we evaluate the suitability of DAP-based point clouds for individual tree detection in a Pinus radiata plantation. Two ITD algorithms designed to work with point cloud data are applied to dense point clouds generated from small-format photography (SFP) and medium-format photography (MFP), and to an ALS point cloud. Performance of the two ITD algorithms, the influence of stand structure on tree detection rates, and the relationship between tree detection rates and canopy structural metrics are investigated. Overall, we show that there is good agreement between ALS- and DAP-based ITD results (the proportion of false negatives for ALS, SFP, and MFP was always lower than 29.6%, 25.3%, and 28.6%, respectively, whereas the proportion of false positives for ALS, SFP, and MFP was always lower than 39.4%, 30.7%, and 33.7%, respectively). Differences between small- and medium-format DAP results were minor (for SFP and MFP, differences between recall, precision, and F-score were always less than 0.08, 0.03, and 0.05, respectively), suggesting that DAP point cloud data is robust for ITD. Our results show that among all the canopy structural metrics, the number of trees per hectare has the greatest influence on tree detection rates.
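
The specific ITD algorithms are not named in the abstract, but a common baseline detects tree tops as local maxima of a canopy height model (CHM). The sketch below illustrates that baseline under the assumption of a rasterized CHM; it is not the algorithms evaluated in the paper, and the window and height threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_tree_tops(chm, window=5, min_height=2.0):
    """Tree tops are cells that equal the maximum of their window
    neighborhood and exceed a minimum height (to ignore ground/shrubs)."""
    is_local_max = maximum_filter(chm, size=window) == chm
    return np.argwhere(is_local_max & (chm > min_height))

# Toy canopy height model with two crowns
chm = np.zeros((20, 20))
chm[5, 5], chm[5, 6] = 18.0, 15.0      # crown 1: apex at (5, 5)
chm[14, 12], chm[13, 12] = 22.0, 17.0  # crown 2: apex at (14, 12)
tops = detect_tree_tops(chm)
```

The shoulder cells (15.0 m and 17.0 m) are suppressed because a taller neighbor falls inside their window, so only the two apices are reported.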

  • Research Article
  • Citations: 1
  • 10.3390/s24134036
Extraction of Moso Bamboo Parameters Based on the Combination of ALS and TLS Point Cloud Data.
  • Jun 21, 2024
  • Sensors (Basel, Switzerland)
  • Suying Fan + 5 more

Extracting moso bamboo parameters from single-source point cloud data has limitations. In this article, a new approach for extracting moso bamboo parameters using airborne laser scanning (ALS) and terrestrial laser scanning (TLS) point cloud data is proposed. Using the field-surveyed coordinates of plot corner points and the Iterative Closest Point (ICP) algorithm, the ALS and TLS point clouds were aligned. Considering the difference in point distribution of ALS, TLS, and the merged point cloud, individual bamboo plants were segmented from the ALS point cloud using the point cloud segmentation (PCS) algorithm, and individual bamboo plants were segmented from the TLS and the merged point cloud using the comparative shortest-path (CSP) method. The cylinder fitting method was used to estimate the diameter at breast height (DBH) of the segmented bamboo plants. The accuracy was calculated by comparing the bamboo parameter values extracted by the above methods with reference data in three sample plots. The comparison results showed that by using the merged data, the detection rate of moso bamboo plants could reach up to 97.30%; the R2 of the estimated bamboo height was increased to above 0.96, and the root mean square error (RMSE) decreased from 1.14 m at most to a range of 0.35-0.48 m, while the R2 of the DBH fit was increased to a range of 0.97-0.99, and the RMSE decreased from 0.004 m at most to a range of 0.001-0.003 m. The accuracy of moso bamboo parameter extraction was significantly improved by using the merged point cloud data.
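
The cylinder-fitting step for DBH can be illustrated with a least-squares circle fit (the Kåsa method) applied to a horizontal slice of stem points at breast height. This is a sketch under that assumption, not the authors' exact fitting routine; the toy slice is synthetic:

```python
import numpy as np

def fit_circle(xy):
    """Kaasa least-squares circle fit: solve x^2 + y^2 = 2a*x + 2b*y + c,
    giving center (a, b) and radius sqrt(c + a^2 + b^2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Toy breast-height slice of a culm: radius 0.05 m (DBH = 10 cm) plus 1 mm noise
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
slice_xy = np.column_stack([
    1.0 + 0.05 * np.cos(theta) + rng.normal(0, 0.001, 200),
    2.0 + 0.05 * np.sin(theta) + rng.normal(0, 0.001, 200),
])
cx, cy, r = fit_circle(slice_xy)
dbh = 2 * r  # estimated diameter at breast height, in meters
```

The linear formulation avoids iterative optimization, which is why circle/cylinder fits of this kind are robust at the millimeter RMSE levels reported above.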

  • Research Article
  • Citations: 15
  • 10.1080/15481603.2014.950117
Extracting buildings from airborne laser scanning point clouds using a marked point process
  • Aug 27, 2014
  • GIScience & Remote Sensing
  • Bisheng Yang + 2 more

Automatic extraction of buildings from airborne laser scanning (ALS) point clouds is essential for 3D building reconstruction. This paper presents a two-part approach for extracting buildings from ALS data. First, building objects are extracted from ALS data by a marked point process using the Gibbs energy model of buildings and sampled by a reversible jump Markov chain Monte Carlo algorithm. Second, a refinement operation is performed to filter out non-building points and false building objects before extracting buildings from the detected building objects. Experimental results and evaluation using ISPRS benchmark datasets showed the robustness of the proposed method.

  • Conference Article
  • 10.4271/2025-01-8694
A Comparison of UAV LiDAR and Terrestrial Laser Scanner Accuracies
  • Apr 1, 2025
  • Steven Foltz + 2 more

The accident reconstruction community frequently uses Terrestrial LiDAR (TLS) to capture accurate 3D images of vehicle accident sites. This paper compares the accuracy, workflow, benefits, and challenges of Unmanned Aerial Vehicle (UAV) LiDAR, or Airborne Laser Scanning (ALS), to TLS. Two roadways with features relevant to accident reconstruction were selected for testing. ALS missions were conducted at an altitude of 175 feet and a velocity of 4 miles per hour at both sites, followed by 3D scanning using TLS. Survey control points were established to minimize error during cloud-to-cloud TLS registration and to ensure accurate alignment of ALS and TLS point clouds.

After data capture, the ALS point cloud was analyzed against the TLS point cloud. Approximately 80% of ALS points were within 1.8 inches of the nearest TLS point, with 64.8% at the rural site and 59.7% at the suburban site within 1.2 inches. These findings indicate that UAV-based LiDAR can achieve comparable accuracy to TLS in accident site documentation, offering potential advantages in efficiency, safety, and accessibility.
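
The point-to-point comparison described here, counting ALS points whose nearest TLS neighbor lies within a tolerance, can be sketched with a k-d tree. A minimal illustration on synthetic clouds, not the paper's workflow; the noise level and tolerance are invented:

```python
import numpy as np
from scipy.spatial import cKDTree

def fraction_within(source, reference, tolerance):
    """Fraction of source points whose nearest reference point lies
    within the given distance tolerance."""
    dists, _ = cKDTree(reference).query(source)
    return float(np.mean(dists <= tolerance))

# Synthetic clouds: a reference cloud and a copy with ~1 cm Gaussian noise,
# standing in for the TLS and ALS captures of the same scene
rng = np.random.default_rng(2)
tls = rng.uniform(0.0, 10.0, size=(2000, 3))
als = tls + rng.normal(0.0, 0.01, size=tls.shape)
frac = fraction_within(als, tls, tolerance=0.05)  # share of ALS points near a TLS point
```

With sub-tolerance noise, nearly all source points pass the check; the paper's 80%-within-1.8-inches figure is exactly this kind of statistic computed on real clouds.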

  • Research Article
  • Citations: 4
  • 10.1109/access.2022.3158438
A Deep Neural Network Using Double Self-Attention Mechanism for ALS Point Cloud Segmentation
  • Jan 1, 2022
  • IEEE Access
  • Lili Yu + 2 more

Airborne laser scanning (ALS) point cloud segmentation is an essential procedure for 3D data understanding and applications. This task is challenging due to the unstructured, disordered, and sparse distribution of the point cloud. PointNet++ is a well-known end-to-end learning network for point cloud segmentation, but it does not fully exploit local and contextual features, making it less efficient and accurate in capturing the complexity of point clouds. On this basis, we design a novel encoder-decoder network architecture, termed DSPNet++, to obtain the semantic features of the ALS point cloud at different levels and achieve a better segmentation effect. Its improved local feature aggregation module merges the deep features of the point cloud by combining local and global self-attention convolutional networks; it adaptively explores the inherent semantic features of points and captures more extensive context information of the ALS point cloud. Finally, a conditional random field optimization model is used to refine the segmentation results. We evaluated the performance of our method on the Vaihingen dataset of the International Society for Photogrammetry and Remote Sensing (ISPRS) and the GML(B) 3D dataset. Experimental results show that our method fully exploits the semantic features of the ALS point cloud and achieves higher accuracy. A comparative study with established deep learning models also confirms that our proposed method has outstanding performance in the ALS point cloud segmentation task.
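
The self-attention building block such networks rely on can be shown in isolation: each point feature is replaced by a similarity-weighted sum of all point features, so every point aggregates context from the whole set. A minimal NumPy sketch of plain scaled dot-product self-attention, not DSPNet++'s actual modules:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a set of point features:
    each output row is a softmax-weighted combination of all input rows."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                # pairwise feature similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(4)
feats = rng.normal(size=(8, 16))  # 8 points, 16-dimensional features
out = self_attention(feats)
```

Because each output is a convex combination of the inputs, the operation mixes context without inventing feature values outside the observed range.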

  • Research Article
  • Citations: 28
  • 10.3390/rs6098405
Automatic Vehicle Extraction from Airborne LiDAR Data Using an Object-Based Point Cloud Analysis Method
  • Sep 5, 2014
  • Remote Sensing
  • Jixian Zhang + 3 more

Automatic vehicle extraction from an airborne laser scanning (ALS) point cloud is very useful for many applications, such as digital elevation model generation and 3D building reconstruction. In this article, an object-based point cloud analysis (OBPCA) method is proposed for vehicle extraction from an ALS point cloud. First, a segmentation-based progressive TIN (triangular irregular network) densification is employed to detect the ground points, and the potential vehicle points are detected based on the normalized heights of the non-ground points. Second, 3D connected component analysis is performed to group the potential vehicle points into segments. Finally, vehicle segments are detected based on three features: area, rectangularity, and elongatedness. Experiments suggest that the proposed method achieves higher accuracy than the existing mean-shift-based method for vehicle extraction from an ALS point cloud. Moreover, the larger the point density, the higher the achieved accuracy.
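
The final feature-based filtering step can be sketched as follows. The thresholds, the grid-cell area proxy, and the axis-aligned bounding box are illustrative assumptions; the paper's tuned values and (likely oriented) boxes may differ:

```python
import numpy as np

def segment_features(points_xy, cell=0.25):
    """Footprint area (occupied-grid proxy), rectangularity, and
    elongatedness of a candidate segment's 2D projection."""
    extent = points_xy.max(axis=0) - points_xy.min(axis=0)
    width, length = np.sort(extent)               # width <= length
    occupied = {tuple(c) for c in np.floor(points_xy / cell).astype(int)}
    area = len(occupied) * cell**2                # sum of occupied grid cells
    rectangularity = area / (width * length)      # how box-like the footprint is
    elongatedness = length / width
    return area, rectangularity, elongatedness

def looks_like_vehicle(points_xy):
    # Illustrative thresholds, not the paper's tuned values
    area, rect, elong = segment_features(points_xy)
    return 4.0 <= area <= 20.0 and rect >= 0.5 and 1.2 <= elong <= 4.0

# Car-sized segment (4.5 m x 1.8 m) versus a long thin wall (20 m x 0.5 m)
rng = np.random.default_rng(3)
car = rng.uniform([0.0, 0.0], [4.5, 1.8], size=(400, 2))
wall = rng.uniform([0.0, 0.0], [20.0, 0.5], size=(400, 2))
```

The wall fails the elongatedness bound despite having a plausible area, which is the point of combining all three features.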

  • Research Article
  • Citations: 20
  • 10.1016/j.isprsjprs.2022.01.012
VD-LAB: A view-decoupled network with local-global aggregation bridge for airborne laser scanning point cloud classification
  • Feb 10, 2022
  • ISPRS Journal of Photogrammetry and Remote Sensing
  • Jihao Li + 6 more


  • Research Article
  • Citations: 47
  • 10.3390/rs13030472
PointNet++ Network Architecture with Individual Point Level and Global Features on Centroid for ALS Point Cloud Classification
  • Jan 29, 2021
  • Remote Sensing
  • Yang Chen + 4 more

Airborne laser scanning (ALS) point clouds have been widely used in fields such as ground powerline surveying, forest monitoring, and urban modeling. However, the sparsity and uneven distribution of point clouds increase the difficulty of setting uniform parameters for semantic classification. The PointNet++ network is an end-to-end learning network for irregular point data that is highly robust to small perturbations and corruption of input points. It eliminates the need to calculate costly handcrafted features and provides a new paradigm for 3D understanding. However, each local region in its output is abstracted by its centroid together with a local feature that encodes the centroid's neighborhood. Because of random sampling, the feature learned at the centroid may not contain relevant information about the centroid point itself, especially in large-scale neighborhood balls. Moreover, the centroid's global-level information in each sampling layer is also not retained. Therefore, this study proposed a modified PointNet++ network architecture which concentrates the point-level and global features on the centroid point alongside the local features to facilitate classification. The proposed approach also utilizes a modified focal loss function to address the extremely uneven category distribution of ALS point clouds. An elevation- and distance-based interpolation method is also proposed for objects in ALS point clouds that exhibit discrepancies in elevation distributions. Experiments on the Vaihingen dataset of the International Society for Photogrammetry and Remote Sensing and the GML(B) 3D dataset demonstrate that the proposed method, which provides additional contextual information to support classification, achieves high accuracy with simple discriminative models and new state-of-the-art performance in power line categories.
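
The modified focal loss is not spelled out here, but the standard focal loss it builds on down-weights easy examples so that rare, hard classes dominate the gradient. A minimal NumPy sketch of the standard form, with invented probabilities:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t).
    The (1 - p_t)**gamma factor shrinks the loss of well-classified
    points so hard, rare classes dominate the average."""
    p_t = probs[np.arange(len(targets)), targets]  # probability of true class
    p_t = np.clip(p_t, 1e-12, 1.0)
    weight = (1.0 - p_t) ** gamma
    if alpha is not None:
        weight = weight * alpha[targets]           # optional per-class weights
    return float(np.mean(-weight * np.log(p_t)))

# Confident correct predictions contribute almost nothing; the hard third
# point (true class 2 predicted at only 0.10) dominates the loss.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.10, 0.85, 0.05],
                  [0.60, 0.30, 0.10]])
targets = np.array([0, 1, 2])
loss = focal_loss(probs, targets)
```

With `gamma=0` the modulating factor disappears and the expression reduces to plain cross-entropy, which is why gamma is described as focusing the loss.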

  • Research Article
  • Citations: 2
  • 10.3390/s21186193
Graph Attention Feature Fusion Network for ALS Point Cloud Classification.
  • Sep 15, 2021
  • Sensors
  • Jie Yang + 2 more

Classification is a fundamental task for airborne laser scanning (ALS) point cloud processing and applications. This task is challenging due to outdoor scenes with high complexity and point clouds with irregular distribution. Many existing methods based on deep learning techniques have drawbacks, such as complex pre/post-processing steps, an expensive sampling cost, and a limited receptive field size. In this paper, we propose a graph attention feature fusion network (GAFFNet) that can achieve a satisfactory classification performance by capturing wider contextual information of the ALS point cloud. Based on the graph attention mechanism, we first design a neighborhood feature fusion unit and an extended neighborhood feature fusion block, which effectively increases the receptive field for each point. On this basis, we further design a neural network based on encoder–decoder architecture to obtain the semantic features of point clouds at different levels, allowing us to achieve a more accurate classification. We evaluate the performance of our method on a publicly available ALS point cloud dataset provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). The experimental results show that our method can effectively distinguish nine types of ground objects. We achieve more satisfactory results on different evaluation metrics when compared with the results obtained via other approaches.

  • Research Article
  • Citations: 49
  • 10.1016/j.isprsjprs.2021.04.017
GraNet: Global relation-aware attentional network for semantic segmentation of ALS point clouds
  • May 12, 2021
  • ISPRS Journal of Photogrammetry and Remote Sensing
  • Rong Huang + 2 more


  • Research Article
  • Citations: 9
  • 10.3390/s20236969
ALS Point Cloud Classification by Integrating an Improved Fully Convolutional Network into Transfer Learning with Multi-Scale and Multi-View Deep Features
  • Dec 6, 2020
  • Sensors (Basel, Switzerland)
  • Xiangda Lei + 5 more

Airborne laser scanning (ALS) point clouds have been widely used in various fields because they provide three-dimensional data with high accuracy on a large scale. However, because ALS data are discrete, irregularly distributed, and contain noise, it is still a challenge to accurately identify various typical surface objects from 3D point clouds. In recent years, many researchers have achieved better results in classifying 3D point clouds by using different deep learning methods. However, most of these methods require a large number of training samples and cannot be widely used in complex scenarios. In this paper, we propose an ALS point cloud classification method that integrates an improved fully convolutional network into transfer learning with multi-scale and multi-view deep features. First, shallow features of the airborne laser scanning point cloud, such as height, intensity, and change of curvature, are extracted to generate feature maps by multi-scale voxel and multi-view projection. Second, these feature maps are fed into the pre-trained DenseNet201 model to derive deep features, which are used as input for a fully convolutional neural network with convolutional and pooling layers. By using this network, the local and global features are integrated to classify the ALS point cloud. Finally, a graph-cuts algorithm considering context information is used to refine the classification results. We tested our method on the semantic 3D labeling dataset of the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that the overall accuracy and average F1 score obtained by the proposed method are 89.84% and 83.62%, respectively, when only 16,000 points of the original data are used for training.

  • Research Article
  • Citations: 25
  • 10.1016/j.isprsjprs.2022.03.001
Weakly supervised semantic segmentation of airborne laser scanning point clouds
  • Mar 11, 2022
  • ISPRS Journal of Photogrammetry and Remote Sensing
  • Yaping Lin + 2 more

While modern deep learning algorithms for semantic segmentation of airborne laser scanning (ALS) point clouds have achieved considerable success, the training process often requires a large number of labelled 3D points. Pointwise annotation of 3D point clouds, especially for large-scale ALS datasets, is extremely time-consuming. Weak supervision, which needs only a small annotation effort while allowing networks to achieve comparable performance, is an alternative solution. Assigning a weak label to a subcloud, a group of points, is an efficient annotation strategy. With the supervision of subcloud labels, we first train a classification network that produces pseudo labels for the training data. Then the pseudo labels are taken as the input of a segmentation network which gives the final predictions on the testing data. As the quality of pseudo labels determines the performance of the segmentation network on testing data, we propose an overlap region loss and an elevation attention unit for the classification network to obtain more accurate pseudo labels. The overlap region loss, which considers the semantic information of nearby subclouds, is introduced to enhance the awareness of the semantic heterogeneity within a subcloud. The elevation attention helps the classification network to encode more representative features for ALS point clouds. For the segmentation network, in order to effectively learn representative features from inaccurate pseudo labels, we adopt a supervised contrastive loss that uncovers the underlying correlations of class-specific features. Extensive experiments on three ALS datasets demonstrate the superior performance of our model over the baseline method (Wei et al., 2020). With the same amount of labelling effort, for the ISPRS benchmark dataset, the Rotterdam dataset and the DFC2019 dataset, our method raises the overall accuracy by 0.062, 0.112 and 0.031, and the average F1 score by 0.09, 0.178 and 0.043, respectively.
Our code is publicly available at ‘https://github.com/yaping222/Weak_ALS.git’.

  • Research Article
  • Citations: 43
  • 10.1016/j.isprsjprs.2021.04.016
Local and global encoder network for semantic segmentation of Airborne laser scanning point clouds
  • Apr 30, 2021
  • ISPRS Journal of Photogrammetry and Remote Sensing
  • Yaping Lin + 3 more

Interpretation of Airborne Laser Scanning (ALS) point clouds is a critical procedure for producing various geo-information products like 3D city models, digital terrain models and land use maps. In this paper, we present a local and global encoder network (LGENet) for semantic segmentation of ALS point clouds. Adapting the KPConv network, we first extract features by both 2D and 3D point convolutions to allow the network to learn more representative local geometry. Then global encoders are used in the network to exploit contextual information at the object and point level. We design a segment-based Edge Conditioned Convolution to encode the global context between segments. We apply a spatial-channel attention module at the end of the network, which not only captures the global interdependencies between points but also models interactions between channels. We evaluate our method on two ALS datasets, namely the ISPRS benchmark dataset and the DFC2019 dataset. For the ISPRS benchmark dataset, our model achieves state-of-the-art results with an overall accuracy of 0.845 and an average F1 score of 0.737. With regards to the DFC2019 dataset, our proposed network achieves an overall accuracy of 0.984 and an average F1 score of 0.834.
