- Research Article
- 10.1111/phor.70037
- Jan 1, 2026
- The Photogrammetric Record
- B Mary Nathisiya + 1 more
ABSTRACT Monitoring vessel activity within Exclusive Economic Zones (EEZs) is essential for maritime security, environmental protection, and sustainable resource management. This study presents a novel framework that combines high‐resolution aerial imagery, photogrammetric geolocation techniques, and deep learning–based object detection to detect and monitor vessels near maritime boundaries. Using the SeaDronesSee dataset, vessels are automatically detected with YOLOv8 and georeferenced through image metadata, enabling accurate transformation from image to geographic coordinates. Spatial queries with EEZ boundary datasets are then applied to identify potential violations. Experimental evaluation demonstrates a detection accuracy of 98.3%, with robust performance across varied vessel types and imaging conditions. The framework is further supported by a lightweight security layer to ensure reliable transmission of boundary violation alerts. This integration of photogrammetric image analysis, automated object detection, and geospatial boundary validation provides an efficient and scalable approach to maritime monitoring, contributing to the advancement of remote sensing and photogrammetric applications in marine surveillance.
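The core geolocation step the abstract describes, transforming an image-space detection into geographic coordinates and testing it against an EEZ boundary, can be sketched as follows. This is a minimal flat-earth illustration, not the paper's implementation; the metadata field names, the simple GSD/heading model, and the toy boundary are all assumptions for the sketch.

```python
import math

def pixel_to_geo(px, py, meta):
    """Project a detected vessel's pixel centre to lat/lon: offset from the
    image principal point is scaled by ground sampling distance (GSD),
    rotated by platform heading, and added to the geotagged camera position.
    A flat-earth approximation, adequate only for small footprints."""
    dx = (px - meta["width"] / 2) * meta["gsd_m"]    # metres right of centre
    dy = (meta["height"] / 2 - py) * meta["gsd_m"]   # metres above centre
    h = math.radians(meta["heading_deg"])            # rotate into map frame
    east = dx * math.cos(h) + dy * math.sin(h)
    north = -dx * math.sin(h) + dy * math.cos(h)
    lat = meta["lat"] + north / 111_320.0
    lon = meta["lon"] + east / (111_320.0 * math.cos(math.radians(meta["lat"])))
    return lat, lon

def inside_polygon(lat, lon, ring):
    """Ray-casting point-in-polygon test against a boundary ring given as
    (lat, lon) vertices -- the spatial query behind a violation alert."""
    inside = False
    n = len(ring)
    for i in range(n):
        (y1, x1), (y2, x2) = ring[i], ring[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# illustrative metadata and a toy rectangular EEZ segment
meta = {"width": 4000, "height": 3000, "gsd_m": 0.05,
        "heading_deg": 0.0, "lat": 8.50, "lon": 76.90}
eez = [(8.49, 76.89), (8.49, 76.91), (8.51, 76.91), (8.51, 76.89)]
lat, lon = pixel_to_geo(2600, 1200, meta)
violation = inside_polygon(lat, lon, eez)
```

In practice the detection centres would come from a YOLOv8 inference pass and the boundary from an official EEZ shapefile; production code would use a proper map projection rather than the flat-earth shortcut.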
- Research Article
- 10.1111/phor.70035
- Jan 1, 2026
- The Photogrammetric Record
- Haitao Liu + 4 more
ABSTRACT Band Selection (BS) is an essential method in the classification of hyperspectral images, as it effectively decreases spectral redundancy in hyperspectral remote sensing data, lowers computational expenses, and identifies the best band subsets that offer improved discriminative power from numerous spectral dimensions. Evolutionary algorithms (EAs), known for their strong search capabilities, have been effectively utilized as BS techniques in hyperspectral image analysis. However, many current EA‐based BS methods encounter two significant issues: (1) a tendency to become trapped in local optima and experience premature convergence, and (2) a high sensitivity to the choice of initialization methods and hyperparameter settings, resulting in variable performance stability. To overcome these challenges, this study introduces a multi‐strategy enhanced salp swarm optimization method (MSSA) for optimal spectral band selection, referred to as MSSA‐BS. Initially, we improve the population initialization process to achieve a more even distribution of individuals within the initial population across the search space, which enhances diversity and reduces sensitivity to initialization. Furthermore, the algorithm incorporates a Lévy flight strategy and optimization of inertial weights to enhance search dynamics. This combined approach narrows the search area, speeds up evolutionary development, and aids in escaping local optima, thus boosting optimization effectiveness. Additionally, a chain follower mechanism is implemented to update the positions of the least effective individuals, further enhancing the algorithm's exploration capabilities. Together, these advancements systematically tackle the identified challenges. To assess the performance of MSSA‐BS, extensive experiments are carried out on three standard hyperspectral image (HSI) datasets. The findings indicate that MSSA‐BS achieves higher classification accuracy compared to various leading BS methods when used in conjunction with a support vector machine (SVM) classifier.
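Two of the ingredients named above, the Lévy flight and the decaying inertia weight, are standard and can be sketched generically. This is not the MSSA‑BS implementation; the 1-D toy update, the seed, and the step scale are illustrative assumptions.

```python
import math
import random

def mantegna_sigma(beta):
    """Scale of the u-component in Mantegna's algorithm for Lévy-stable steps."""
    return (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
            / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

def levy_step(beta, rng):
    """Heavy-tailed step: u ~ N(0, sigma^2), v ~ N(0, 1), step = u / |v|^(1/beta).
    Occasional long jumps let a search agent escape local optima."""
    u = rng.gauss(0.0, mantegna_sigma(beta))
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decaying inertia: broad exploration early, fine search late."""
    return w_max - (w_max - w_min) * t / t_max

# toy 1-D walk of a single "salp leader" combining both mechanisms
rng = random.Random(7)
position = 0.5
for t in range(100):
    position += inertia_weight(t, 100) * levy_step(1.5, rng) * 0.01
```

In a real BS setting the position vector would encode a candidate band subset and each step would be scored by classifier accuracy on that subset.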
- Research Article
- 10.1111/phor.70038
- Jan 1, 2026
- The Photogrammetric Record
- Research Article
- 10.1111/phor.70040
- Jan 1, 2026
- The Photogrammetric Record
- Yongxiu Zhou + 4 more
ABSTRACT Current machine learning‐based landslide susceptibility assessment heavily relies on supervised classification, which necessitates both landslide and non‐landslide samples. However, the selection of non‐landslide samples (negative samples) suffers from significant epistemic uncertainty and a lack of standardized criteria, introducing bias that compromises model reliability. To bridge this gap, this study proposes a novel framework using One‐Class Classification (OCC), which eliminates the dependency on unreliable negative samples by training exclusively on landslide occurrences. We use a historical landslide dataset from Luding County, China, recorded prior to the earthquake of September 5, 2022, as the training data, and post‐earthquake landslide data as the testing data. We model the data using three one‐class classifiers: One‐Class Support Vector Machines (OCSVM), Isolation Forest (IForest), and One‐Class K‐nearest neighbors (OCKNN). We then compare the results with supervised classification algorithms based on Support Vector Machines (SVM), Random Forest, and KNN. The results show that OCSVM has a higher recall rate (0.865) than SVM (0.639) for high‐susceptibility areas, IForest has a higher recall rate (0.903) than Random Forest (0.884), and OCKNN performs best with a recall rate of 0.968, surpassing KNN (0.923). Furthermore, we employ SHAP to interpret the OCKNN model, identifying elevation as the most influential factor in landslide susceptibility, followed by the terrain ruggedness index (TRI) and slope. This enhances the interpretability of the model and provides insight into the driving factors of landslides. The results demonstrate that the proposed one‐class classification effectively addresses the issue of negative‐sample quality in traditional supervised learning, providing a new approach for landslide susceptibility assessment in data‐scarce regions.
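The idea behind the best-performing classifier above, one-class k-NN scoring without any negative samples, can be illustrated in a few lines. This is a generic sketch, not the paper's OCKNN configuration; the 2-D feature vectors, the choice of k, and the threshold are illustrative assumptions.

```python
import math

def ocknn_scores(train, queries, k=3):
    """One-class k-NN: a query's anomaly score is its distance to the k-th
    nearest training (landslide) sample. Small score = similar to known
    landslides. No non-landslide samples are required."""
    scores = []
    for q in queries:
        dists = sorted(math.dist(q, x) for x in train)
        scores.append(dists[k - 1])
    return scores

# toy 2-D conditioning-factor vectors for known landslide cells
landslides = [(0.9, 0.8), (1.0, 0.9), (0.8, 1.0), (0.95, 0.85)]
queries = [(0.9, 0.9), (0.1, 0.1)]           # one near the class, one far away
s_near, s_far = ocknn_scores(landslides, queries, k=2)
# threshold would normally be tuned on held-out landslide data
susceptible = [s <= 0.3 for s in (s_near, s_far)]
```

Real inputs would be normalised stacks of conditioning factors (elevation, TRI, slope, ...) per map cell, and the score map would be binned into susceptibility classes.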
- Research Article
- 10.1111/phor.70034
- Jan 1, 2026
- The Photogrammetric Record
- Shihao Deng + 4 more
ABSTRACT Although accurate classification of large‐format, high‐resolution remote sensing images is essential for land cover mapping, balancing computational efficiency with classification performance remains challenging. Traditional methods often incur high computational costs and achieve limited accuracy when applied to large‐format data. This study addresses this challenge through a graph neural network classification method that combines a parallel fine segmentation strategy with graph neighborhood relationship optimization, alleviating the computational efficiency bottleneck while improving classification accuracy. Our methodology comprises three key components. First, an adaptive compactness parameter‐based tiling method using simple linear iterative clustering (SLIC) generates uniform image patches through radiometric resolution downsampling and parallel allocation via Spark. Second, we propose a SLIC algorithm considering ground features (SLIC‐GF), which employs the Otsu method and the ratio vegetation index (RVI) to distinguish vegetation from non‐vegetation pixels before fine segmentation. Finally, image objects are structured into graphs for classification via graph neural networks, with neighborhood‐based correction of misclassified nodes. Experimental results from three high‐resolution datasets show that our parallel segmentation strategy improves average computational efficiency threefold while reducing standard deviation (SD) and value range (R) by 41.5% and 51.9%, respectively. Compared to the original SLIC, SLIC‐GF improves achievable segmentation accuracy (ASA) by 3.53%, reduces under‐segmentation error (UE) by 5.6%, and increases boundary recall (BR) by 2.8%. Furthermore, the graph attention network (GAT) with neighborhood optimization significantly enhances classification performance, yielding average improvements of 0.0165 in Kappa coefficient and 1.08% in overall accuracy (OA).
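The vegetation/non-vegetation pre-split that SLIC‑GF performs rests on Otsu's method applied to an index such as RVI; the thresholding itself can be sketched generically. This is a minimal histogram-based Otsu on a toy bimodal sample, not the paper's SLIC‑GF code; the bin count and sample values are illustrative assumptions.

```python
def otsu_threshold(values, bins=64):
    """Otsu's method: pick the cut that maximises between-class variance of a
    1-D distribution, splitting pixels into two classes (here: vegetation vs.
    non-vegetation by a ratio-vegetation-index value)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum((i + 0.5) * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for i, h in enumerate(hist):
        w0 += h
        if w0 == 0 or w0 == total:
            continue
        sum0 += (i + 0.5) * h
        m0 = sum0 / w0                       # mean of the lower class
        m1 = (sum_all - sum0) / (total - w0) # mean of the upper class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, i
    return lo + (best_t + 1) * width

# toy bimodal RVI sample: bare soil near 1.0, vegetation near 4.0
rvi = [1.0, 1.1, 0.9, 1.2, 0.8, 4.0, 4.2, 3.9, 4.1, 3.8]
t = otsu_threshold(rvi)
veg_mask = [v > t for v in rvi]
```

In the described pipeline, this mask would gate the fine SLIC segmentation so that superpixels do not straddle the vegetation boundary.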
- Research Article
- 10.1111/phor.70028
- Oct 1, 2025
- The Photogrammetric Record
- Yiyang Tan + 3 more
ABSTRACT The interpretation of high‐resolution remote sensing imagery is essential for accurate land use classification. Optical and synthetic aperture radar (SAR) imagery exhibit complementary characteristics. Their fusion offers an effective approach to mitigating the limitations of single‐modal data and improving classification performance. However, the modal heterogeneity and complexity of optical and SAR imagery pose significant challenges for effective fusion. To address these issues, we propose a heterogeneous adaptive fusion network (HAFNet). First, the multi‐modality feature extractor (M2FE) leverages HRNet to retain the spatial details and local textures of optical images, while a lightweight SparseDense Transformer captures the global structural patterns of SAR data. Within M2FE, the multi‐branch integrated feature enhancement further strengthens the extracted features by emphasizing essential semantic information and suppressing noise. Second, the adaptive multi‐scale attention fusion module employs multi‐scale channel and spatial attention to capture critical information from both modalities, and incorporates a gating mechanism to adjust fusion weights, thereby dynamically exploiting cross‐modal complementarity. Finally, the U‐shaped framework with skip connections enhanced by a dynamic channel fusion module restores spatial resolution and improves the recognition accuracy of small‐scale land cover categories. To validate HAFNet, we conducted extensive comparisons with state‐of‐the‐art models on three public datasets with different resolutions. Experimental results demonstrate that HAFNet achieves improvements of approximately 1.2% in overall accuracy, 1.3% in the Kappa coefficient, 1.4% in F1‐score, and 1.6% in mean Intersection over Union, confirming its effectiveness in land use classification.
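The gating mechanism that adjusts fusion weights can be illustrated at its simplest: a sigmoid gate, computed from both modality responses, decides per channel how much optical versus SAR information enters the fused feature. The weights and feature values below are illustrative, not trained HAFNet parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(opt_feat, sar_feat, gate_w, gate_b):
    """Per-channel gated fusion: g in (0, 1) blends the two modalities, so the
    network can lean on whichever modality responds more strongly."""
    fused = []
    for o, s, w, b in zip(opt_feat, sar_feat, gate_w, gate_b):
        g = sigmoid(w * (o - s) + b)   # gate driven by the modality contrast
        fused.append(g * o + (1 - g) * s)
    return fused

opt = [0.9, 0.2, 0.5]                  # optical channel responses (toy)
sar = [0.1, 0.8, 0.5]                  # SAR channel responses (toy)
fused = gated_fuse(opt, sar, gate_w=[4.0, 4.0, 4.0], gate_b=[0.0, 0.0, 0.0])
```

Each fused value stays between the two modality responses; in HAFNet the gate is additionally conditioned on multi-scale channel and spatial attention maps rather than a single scalar per channel.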
- Research Article
- 10.1111/phor.70032
- Oct 1, 2025
- The Photogrammetric Record
- Xiaodong Niu + 6 more
ABSTRACT In multimodal remote sensing image (MRSI) matching, nonlinear radiometric distortion (NRD), scale/geometric inconsistencies, and illumination changes often cause false or missed correspondences. We propose a method that couples an improved self‐similarity index map (SSIM) with absolute phase‐orientation features. First, a feature‐weighted aggregation jointly captures similarity and edge cues. We then fuse odd‐ and even‐symmetric Log‐Gabor filters to derive phase‐congruency–based orientation and scale cues, and combine them with Sobel gradients to form a scale‐adaptive absolute phase‐congruency orientation gradient. Finally, we construct a rank‐order self‐similarity map (SRSIM) to strengthen rotational invariance. We evaluate the method on representative MRSI datasets with translation, scale, rotation, and illumination differences, and compare against five mainstream algorithms. The results show superior robustness under radiometric distortion, contrast variation, orientation reversal, and abrupt phase‐extrema changes. Quantitatively, the average number of matched points (NCM) increases by more than 40%, the average matching success rate by 38%, and the average correct matching rate by 12.23%–31.56%, while the average root‐mean‐square error (RMSE) drops to 2.12 pixels. Overall, the approach markedly improves the accuracy and robustness of automatic multimodal remote sensing image matching.
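One concrete idea behind "absolute" orientation features is worth a small sketch: multimodal pairs often show contrast reversal (dark-on-light becomes light-on-dark), which flips the gradient direction by pi, so folding the orientation into [0, pi) makes it invariant to that reversal. The sketch below uses plain Sobel gradients on a 3x3 toy patch; the paper's descriptor additionally uses Log-Gabor phase congruency, which is not reproduced here.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3(img, kernel, y, x):
    """3x3 correlation of `kernel` with `img` centred at (y, x)."""
    return sum(img[y + dy][x + dx] * kernel[dy + 1][dx + 1]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

def absolute_orientation(img, y, x):
    """Gradient orientation folded modulo pi, so a contrast-reversed edge
    yields the same value as the original."""
    gx = conv3(img, SOBEL_X, y, x)
    gy = conv3(img, SOBEL_Y, y, x)
    return math.atan2(gy, gx) % math.pi

# a vertical edge and its contrast-reversed counterpart
edge = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]
flipped = [[9, 9, 0], [9, 9, 0], [9, 9, 0]]
a1 = absolute_orientation(edge, 1, 1)
a2 = absolute_orientation(flipped, 1, 1)
```

The two patches have opposite gradient signs, yet produce the same folded orientation, which is exactly the invariance the matching descriptor relies on.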
- Research Article
- 10.1111/phor.70030
- Oct 1, 2025
- The Photogrammetric Record
- Rubens Antonio Leite Benevides + 2 more
ABSTRACT Terrestrial laser scanning (TLS) enables the rapid acquisition of three‐dimensional data in the form of 3D point clouds. However, it presents significant challenges, such as lengthy registration times between point cloud pairs and considerable trajectory drift due to widely spaced stations, which leads to inconsistent 3D reconstruction along the TLS path. This research proposes a pipeline that adapts the fast global registration (FGR) algorithm to efficiently handle TLS‐generated point clouds. The approach includes fine‐tuning FGR parameters and additional preprocessing steps, specifically normal orientation and keypoint extraction. The second contribution introduces a global refinement model (GRM) based on linear interpolation of dual quaternions. This closed‐form solution simultaneously refines rotations and translations in a closed circuit without iterative computations or matrix decomposition. Experimental evaluations on four TLS datasets indicate that the proposed pairwise registration with FGR achieves a 90% success rate across 86 point‐cloud pairs from multiple environments. Moreover, our drift‐correction model reduces closure errors by up to 41% in the dataset circuits, improving pose accuracy in closed trajectories with theoretical advantages that translate into efficient and fast implementation.
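The closed-form drift correction described above distributes the loop-closure error over the stations of a closed trajectory. The sketch below is a simplified analogue that treats rotation (as a unit quaternion) and translation separately, whereas the paper blends full dual quaternions; the pose values and closure error are illustrative.

```python
import math

def q_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_pow(q, t):
    """Fractional quaternion power: rotate by fraction t of q's angle."""
    w, x, y, z = q
    n = math.sqrt(x*x + y*y + z*z)
    if n < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)
    ang = 2.0 * math.atan2(n, w) * t
    s = math.sin(ang / 2.0) / n
    return (math.cos(ang / 2.0), x*s, y*s, z*s)

def distribute_closure(poses, closure_q, closure_t):
    """Closed-form drift correction: station i receives fraction i/N of the
    inverse closure rotation and translation, with no iteration."""
    n = len(poses)
    out = []
    for i, (q, t) in enumerate(poses):
        f = i / n
        cq = q_pow(closure_q, -f)                       # fractional inverse rotation
        ct = tuple(p - f * e for p, e in zip(t, closure_t))
        out.append((q_mul(cq, q), ct))
    return out

# four identity-rotation stations 1 m apart, with an accumulated closure error
poses = [((1.0, 0.0, 0.0, 0.0), (float(i), 0.0, 0.0)) for i in range(4)]
closure_q = (math.cos(0.1), 0.0, 0.0, math.sin(0.1))    # 0.2 rad drift about z
closure_t = (0.4, 0.0, 0.0)                             # 0.4 m translational drift
corrected = distribute_closure(poses, closure_q, closure_t)
```

The first station is left untouched while later stations absorb progressively larger shares of the error, which is the behaviour that shrinks closure error in the closed circuits.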
- Research Article
- 10.1111/phor.70031
- Oct 1, 2025
- The Photogrammetric Record
- Abdurahman Yasin Yiğit + 3 more
ABSTRACT Accurate and efficient 3D reconstruction of small‐scale objects remains challenging due to intricate geometries, limited imaging volumes, and sensitivity to acquisition conditions. This study presents a quantitative comparison between two close‐range photogrammetric acquisition methods: a conventional manual tripod setup and a custom‐built, automated turntable platform controlled by an Arduino microcontroller. Four geometrically distinct objects were reconstructed using both approaches and analyzed through a unified Structure‐from‐Motion (SfM) workflow. Dimensional accuracy was assessed using reference measurements obtained with a digital vernier caliper (±0.01 mm precision), while geometric fidelity was evaluated through Cloud‐to‐Cloud (C2C) surface deviation analysis. Results consistently favored the automated system. For instance, the house object achieved a Root Mean Square Error (RMSE) of 0.18 cm with the turntable system versus 0.70 cm manually. The jug, with complex occlusions, exhibited a C2C mean deviation of 0.411 cm and a maximum deviation of 1.1 cm in the manual method, while the ceramic swan yielded the lowest mean error of 0.006 cm. In terms of efficiency, the automated platform reduced acquisition time by nearly 50%, improved repeatability, and minimized operator input. These findings underscore the potential of low‐cost, semi‐automated acquisition systems for improving the accuracy, reliability, and scalability of photogrammetric measurement workflows. The proposed system is especially well suited for technical education, low‐budget laboratory environments, and object‐scale documentation scenarios requiring consistent measurement standards.
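The two evaluation metrics used throughout this comparison, C2C deviation and dimensional RMSE, are simple enough to sketch. The brute-force nearest-neighbour search and the toy point clouds and caliper values below are illustrative only; real pipelines use a k-d tree and dense scans.

```python
import math

def c2c_mean(cloud_a, cloud_b):
    """Cloud-to-Cloud deviation: for each point of A, the distance to its
    nearest neighbour in B, averaged over A. Brute force for clarity."""
    return sum(min(math.dist(p, q) for q in cloud_b)
               for p in cloud_a) / len(cloud_a)

def rmse(measured, reference):
    """Root Mean Square Error of scalar dimension checks, e.g. model lengths
    against digital vernier caliper readings."""
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference))
                     / len(measured))

# toy reference model vs. a reconstruction with small vertical offsets
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
scan = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.0), (1.0, 1.0, -0.1)]
dev = c2c_mean(model, scan)                  # mean nearest-neighbour gap
err = rmse([10.18, 9.95], [10.00, 10.00])    # two toy caliper checks, in cm
```

Note that C2C is asymmetric: `c2c_mean(a, b)` and `c2c_mean(b, a)` generally differ, which is why some workflows report both directions or a symmetric variant.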
- Journal Issue
- 10.1111/phor.v40.192
- Oct 1, 2025
- The Photogrammetric Record