  • New
  • Research Article
  • 10.1016/j.plaphe.2026.100202
VE-MLM: A Variable Endmember-based Multilinear Mixing Framework for Crop FAPAR Estimation Using UAV Multispectral Imagery
  • Apr 1, 2026
  • Plant Phenomics
  • Ningge Yuan + 9 more

  • Research Article
  • 10.1016/j.plaphe.2026.100173
UAV-based spatial sampling bridges ground measurements and satellite data for multi-scale estimation of sugar beet aboveground biomass
  • Mar 1, 2026
  • Plant Phenomics
  • Qing Wang + 11 more

  • Research Article
  • Cited by 1
  • 10.1016/j.plaphe.2025.100108
MaizeField3D: A curated 3D point cloud and procedural model dataset of field-grown maize from a diversity panel
  • Mar 1, 2026
  • Plant Phenomics
  • Elvis Kimara + 8 more

The development of artificial intelligence (AI) and machine learning (ML) based tools for 3D phenotyping, especially for maize, has been limited due to the lack of large and diverse 3D datasets. 2D image datasets fail to capture essential structural details such as leaf architecture, plant volume, and spatial arrangements that 3D data provide. To address this limitation, we present MaizeField3D (website), a curated dataset of 3D point clouds of field-grown maize plants from a diverse genetic panel, designed to be AI-ready for advancing agricultural research. Our dataset includes 1045 high-quality point clouds of field-grown maize collected using a terrestrial laser scanner (TLS). Point clouds of 520 plants from this dataset were segmented and annotated using a graph-based segmentation method to isolate individual leaves and stalks, ensuring consistent labeling across all samples. This labeled data was then used for fitting procedural models that provide a structured parametric representation of the maize plants. The leaves of the maize plants in the procedural models are represented using Non-Uniform Rational B-Spline (NURBS) surfaces that were generated using a two-step optimization process combining gradient-free and gradient-based methods. We conducted rigorous manual quality control on all datasets, correcting errors in segmentation, ensuring accurate leaf ordering, and validating metadata annotations. The dataset also includes metadata detailing plant morphology and quality, alongside multi-resolution subsampled point cloud data (100k, 50k, 10k points), which can be readily used for different downstream computational tasks. MaizeField3D will serve as a comprehensive foundational dataset for AI-driven phenotyping, plant structural analysis, and 3D applications in agricultural research.
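The two-step NURBS fitting described above (a gradient-free search followed by gradient-based refinement) can be illustrated on a toy problem. This is an assumed minimal sketch, not the authors' code: here we fit the amplitude `a` and width `w` of a Gaussian bump to noisy samples, standing in for the surface parameters optimized in the paper.

```python
import numpy as np

# Synthetic observations: a Gaussian "leaf cross-section" with a little noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y_obs = 2.0 * np.exp(-x**2 / (2 * 0.3**2)) + rng.normal(0.0, 0.01, x.size)

def loss(p):
    a, w = p
    return float(np.mean((a * np.exp(-x**2 / (2 * w**2)) - y_obs) ** 2))

def grad(p, eps=1e-6):
    # Central finite-difference gradient of the loss.
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (loss(p + d) - loss(p - d)) / (2 * eps)
    return g

# Step 1: gradient-free — keep the best of 500 random candidates in a broad box.
cands = rng.uniform([0.1, 0.05], [5.0, 1.0], size=(500, 2))
p = min(cands, key=loss)

# Step 2: gradient-based — descent with Armijo backtracking from that start.
for _ in range(200):
    g = grad(p)
    step = 1.0
    while loss(p - step * g) > loss(p) - 0.5 * step * (g @ g) and step > 1e-12:
        step *= 0.5
    p = p - step * g
```

The coarse search keeps the local refinement out of bad basins; the backtracking line search makes the second stage robust to the differing curvatures of the two parameters.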

  • Research Article
  • 10.1016/j.plaphe.2026.100182
Leaf-DETR: Progressive adaptive network with lower matching cost for dense leaves detection
  • Mar 1, 2026
  • Plant Phenomics
  • Xiaoyang Wan + 10 more

Leaves are central indicators of photosynthesis and plant growth status, and their precise monitoring is crucial for smart agriculture. Dense leaf detection, as a foundation for leaf morphology analysis, must address challenges such as occlusion and overlap, directly enabling key tasks including phenotypic trait extraction, disease identification, and yield estimation. Existing dense leaf detection methods rely on traditional modular detectors and generic feature extraction, lacking designs tailored to real-world dense leaf scenarios. In complex field scenarios in particular, they suffer incomplete individual feature extraction due to high leaf overlap and difficult network convergence caused by excessive leaf density. To this end, we propose the Leaf-DETR framework, which addresses these challenges through a Progressive Feature Fusion Pyramid Network (P-FPN) and a Crowded Query Refinement (CQR) strategy. First, we construct the largest dense leaf detection dataset to date, containing 1,696 images and 85,375 annotation boxes. Second, P-FPN alleviates feature confusion among overlapping leaves through multi-stage feature fusion and an Adaptive Feature Aggregation (AFA) module, enhancing the interaction between low-level details and high-level semantics. Third, the CQR strategy significantly reduces the matching cost of crowded candidate boxes and improves convergence efficiency by culling crowded queries and introducing a one-to-many matching mechanism. Finally, experimental results show that Leaf-DETR improves mAP@50 by 1% and AR@300 by 1.4% over the baseline model on our self-constructed dataset, outperforming existing detection methods. The model also exhibits fast training convergence and strong generalization on both field-collected monitoring images and other staple crops, highlighting its practical value in complex agricultural scenarios. The code and detailed information are available at http://leafdetr.samlab.cn.
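The one-to-many matching idea behind strategies like CQR can be sketched briefly. This is an illustrative stand-in, not the Leaf-DETR code: each ground-truth box is matched to its k best queries by IoU, so in crowded scenes every object gets several positive queries instead of exactly one, which is what eases the matching cost.

```python
import numpy as np

def iou(a, b):
    # a: (N, 4), b: (M, 4) boxes as [x1, y1, x2, y2]; returns (N, M) IoU matrix.
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def one_to_many_match(queries, gts, k=3):
    # Returns (gt_index, query_index) positive pairs: k best queries per GT.
    m = iou(gts, queries)                       # (num_gt, num_query)
    pairs = []
    for g in range(m.shape[0]):
        topk = np.argsort(m[g])[::-1][:k]       # k highest-IoU queries
        pairs += [(g, int(q)) for q in topk if m[g, q] > 0]
    return pairs

queries = np.array([[0, 0, 10, 10], [1, 1, 11, 11],
                    [20, 20, 30, 30], [2, 0, 12, 10]], float)
gts = np.array([[0, 0, 10, 10], [20, 20, 30, 30]], float)
pairs = one_to_many_match(queries, gts, k=2)
```

The first ground truth picks up two overlapping queries, while the isolated one keeps a single positive; a production assigner would also fold classification cost into the ranking.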

  • Open Access
  • Research Article
  • 10.1016/j.plaphe.2026.100171
Species-specific tree structural parameters extraction via UAV RGB-LiDAR data and multimodal instance segmentation
  • Mar 1, 2026
  • Plant Phenomics
  • Jiansen Wang + 5 more

Complex forest structures, interspecies similarities, and intraspecies variations constrain the acquisition of species-specific tree phenotypes. This study develops a scalable framework for extracting species-specific structural parameters at the individual tree level. Leveraging ultrahigh-resolution UAV-based RGB and LiDAR data, we propose a novel self-attention-guided spectral–structural multimodal fusion transformer (SAMFormer). Key components include: (1) an adaptive feature enhancement module (AFEM) that employs spatial and channel attention to selectively highlight canopy features while suppressing background noise; (2) a cross-modal fusion module (CMFM) that captures intra- and inter-modal dependencies through the cross-attention mechanism, generating highly discriminative representations. SAMFormer achieves fine-grained tree identification in complex forest environments, relieving issues of blurred canopy segmentation and species misclassification. K-fold cross-validation demonstrates robust performance across diverse scenes, achieving 86.3% F1-score and 88.0% mAP@0.5, significantly outperforming single-modal inputs and mainstream instance segmentation models. We generate large-scale species-specific maps of tree structural parameters based on SAMFormer outputs, allometric equations, and a sliding window strategy. Subsequently, these parameters are utilized to map carbon stock. Ecological analysis reveals a coupling relationship between tree competition and structural parameters/carbon stock: competition intensity exhibits a significant negative correlation with both (p<0.001). Trees adapt by adjusting growth strategies (e.g., reducing radial growth and limiting canopy expansion), ultimately lowering biomass accumulation and carbon stock. Additionally, species mixing enhances carbon stock, as mixed forests store more carbon than monocultures. This work provides a high-throughput, non-destructive pathway for forest phenotyping, supporting precision forestry and climate-adaptive management practices.
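The cross-attention at the heart of a module like CMFM can be written in a few lines. This is a minimal single-head sketch with random projection weights standing in for learned parameters, not the authors' implementation: spectral tokens query the structural (LiDAR) tokens, so each spectral feature is re-expressed as a relevance-weighted mix of structural context.

```python
import numpy as np

def cross_attention(q_feats, kv_feats, d=8, seed=0):
    # Random projections stand in for learned W_q, W_k, W_v.
    rng = np.random.default_rng(seed)
    Wq = rng.normal(0, 0.1, (q_feats.shape[1], d))
    Wk = rng.normal(0, 0.1, (kv_feats.shape[1], d))
    Wv = rng.normal(0, 0.1, (kv_feats.shape[1], d))
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(d)                   # scaled dot-product
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over kv tokens
    return attn @ V, attn

spectral = np.random.default_rng(1).normal(size=(5, 16))    # 5 spectral tokens
structural = np.random.default_rng(2).normal(size=(7, 16))  # 7 structural tokens
out, attn = cross_attention(spectral, structural)
```

Swapping which modality supplies the queries gives the reverse direction of the intra-/inter-modal dependency capture the abstract describes.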

  • Open Access
  • Research Article
  • 10.1016/j.plaphe.2026.100172
Cross-modal data integration and spectral optimization for enhanced individual apple tree canopy nitrogen concentration estimation using UAV remote sensing
  • Mar 1, 2026
  • Plant Phenomics
  • Bo Chen + 4 more

Precision management in high-density orchards requires individual-tree, nondestructive monitoring of canopy nitrogen concentration (CNC), but hyperspectral applications are limited by two factors: unmodeled vertical stratification of CNC within 3D canopies and mixed-pixel effects near canopy boundaries. We develop a cross-modal framework that co-registers RGB-derived 3D point clouds with hyperspectral orthomosaics, enabling individual-tree localization in dense orchards. With this framework, we quantified layer-specific nitrogen-spectral relationships and assessed mixed-pixel effects across canopy positions. Stratified sampling, continuous wavelet transform (CWT), and partial least squares regression (PLSR) with variable importance in projection (VIP)–based band selection were used for spectral optimization, and K-means was applied to isolate representative canopy pixels. Field experiments over two consecutive years (2023–2024) revealed consistent CNC gradients, with the lower canopy exceeding the upper by 0.5–9.5% across fertilization treatments. CWT-2 delivered the most accurate and robust performance across years. VIP-PLSR indicated layer-dependent CNC-informative wavelengths spanning the visible, red-edge, and near-infrared regions, with scale-dependent cross-layer overlap after CWT. Pixel clustering revealed distinct spatial structure: canopy-interior pixels exhibited characteristic vegetation spectra and achieved validation R² of 0.69–0.76, substantially outperforming boundary-affected pixels with validation R² of 0.48–0.57. These results demonstrate that coupling spectral feature optimization with layer-specific modeling and clustering-based pixel screening improves the accuracy of tree-level CNC estimation in complex canopies. The proposed framework provides a mechanistic and operational basis for robust biochemical retrieval in structurally complex orchard systems.
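The K-means pixel-screening step can be sketched on synthetic data. This is an assumed illustration, not the paper's pipeline: per-pixel spectra (two bands here, [red, NIR]) are clustered, and the cluster whose mean looks most like vegetation is kept; the NIR/red ratio used to pick that cluster is an illustrative stand-in criterion.

```python
import numpy as np

# Synthetic 2-band "spectra": canopy-interior vs boundary/mixed pixels.
rng = np.random.default_rng(1)
veg = rng.normal([0.05, 0.50], 0.02, (100, 2))    # low red, high NIR
soil = rng.normal([0.30, 0.35], 0.02, (100, 2))   # boundary-affected pixels
X = np.vstack([veg, soil])

def kmeans(X, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X)
veg_cluster = int(np.argmax(centers[:, 1] / centers[:, 0]))  # NIR/red ratio
kept = X[labels == veg_cluster]                  # representative canopy pixels
```

Only the `kept` pixels would then feed the CWT/PLSR modeling, mirroring how clustering screens out boundary-affected spectra before retrieval.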

  • Open Access
  • Research Article
  • 10.1016/j.plaphe.2025.100148
PlantSpecLab: A comprehensive open-source platform for high-throughput plant spectral data processing and phenotypic modeling
  • Mar 1, 2026
  • Plant Phenomics
  • Ruoyu Di + 9 more

High-throughput plant phenotyping with hyperspectral imaging (HSI) is pivotal for accelerating crop improvement to address global food security. Adoption is limited by a data-processing bottleneck, forcing a trade-off between costly, inflexible commercial software and programming-intensive open-source libraries. To overcome this barrier, we developed PlantSpecLab, an open-source, no-code platform that unifies the HSI workflow from image processing to modeling within a single interactive interface. The platform introduces spectrally guided segmentation strategies (Range Averaging, Difference Enhancement) and a spectral Fractional-Order Differencing (FOD) preprocessor to enhance extraction of subtle, physiologically relevant features. Across diverse in-house and public datasets, FOD-preprocessed spectra improved model performance over conventional pipelines, yielding 87.35% accuracy for tomato maturity and R² = 0.878 for fruit firmness. In cross-software benchmarks, PlantSpecLab matched the accuracy of ENVI and code-based Python pipelines while reducing end-to-end workflow time by >90% (>80 min to ∼8 min). PlantSpecLab provides a transparent, efficient analytical environment that lowers the technical barrier to HSI analysis. This enables researchers to prioritize biological interpretation while minimizing computational overhead.
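Fractional-order differencing of a spectrum is commonly realized with the Grünwald–Letnikov formulation; the sketch below shows that construction (the platform's exact FOD formulation may differ). An order alpha in (0, 1) interpolates between the raw spectrum (alpha = 0) and its first difference (alpha = 1), retaining baseline information that integer-order differencing discards.

```python
import numpy as np

def fod(spectrum, alpha):
    # Grünwald–Letnikov fractional difference of order alpha.
    s = np.asarray(spectrum, float)
    n = s.size
    # Weights w_k = (-1)^k * C(alpha, k), built by the recurrence
    # w_k = w_{k-1} * (k - 1 - alpha) / k.
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.zeros(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], s[i::-1])    # sum_k w_k * s[i - k]
    return out

x = np.array([1.0, 2.0, 4.0, 7.0])
half_diff = fod(x, 0.5)                          # between raw and 1st difference
```

For long spectra the inner dot products can be replaced by a single convolution, but the loop form keeps the weight recurrence explicit.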

  • Open Access
  • Research Article
  • 10.1016/j.plaphe.2025.100151
Img2Variety: Image-based intraspecific varieties identification across the whole growth period
  • Mar 1, 2026
  • Plant Phenomics
  • Yongrong Cao + 6 more

Accurate identification of crop varieties across growth stages is fundamental for material verification and trial management, providing a reliable basis for subsequent performance evaluation and elite accession selection in breeding programs. However, it remains challenging to differentiate intraspecific varieties due to subtle morphological variations among closely related accessions. Here, we present Img2Variety, a novel convolutional neural network (CNN)-based framework for crop accession identification from whole-plant images. Img2Variety builds on transfer learning by fine-tuning pre-trained CNNs. It is designed to adapt to plant datasets with a large number of accessions but limited samples per accession, thereby improving generalization across diverse accessions. To enrich feature diversity, we propose a novel growth stage and multi-view mixed augmentation (GMMA) strategy that leverages variation in viewing angles and developmental stages to promote feature learning. We also employ an adaptive cross-entropy (ACE) loss that emphasizes misclassified samples during training to improve identification performance. Img2Variety was evaluated using six CNN backbones on two datasets: one comprising 11,170 RGB images of 93 rice (Oryza sativa) accessions throughout the entire growth period, and another containing 5,599 RGB images of 224 maize (Zea mays) inbred lines across nine growth stages. Img2Variety achieved a peak accuracy of 88.66% for rice and 79.95% for maize, with an average relative improvement of 86.30% over six baseline methods on the maize dataset. Notably, it exceeded 80.22% accuracy for pre-heading rice and the maize tenth-leaf stage. These results highlight Img2Variety’s effectiveness in crop variety identification and its potential for early-stage crop management. A web-based implementation is freely accessible at https://ngdc.cncb.ac.cn/opia/img2variety.
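A cross-entropy loss that up-weights hard (misclassified) samples, the idea behind an adaptive cross-entropy, can be sketched as follows. The weighting used here, (1 - p_true)^gamma as in focal loss, is an illustrative stand-in and not the paper's exact ACE formulation.

```python
import numpy as np

def weighted_ce(logits, targets, gamma=2.0):
    # Numerically stable softmax over classes.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_true = p[np.arange(len(targets)), targets]   # probability of true class
    # Hard samples (low p_true) get weight near 1; easy ones are damped.
    return float(np.mean((1 - p_true) ** gamma * -np.log(p_true)))

logits = np.array([[4.0, 0.0, 0.0],   # confident and correct for class 0
                   [0.0, 4.0, 0.0]])  # confident but wrong for class 0
easy = weighted_ce(logits[:1], np.array([0]))
hard = weighted_ce(logits[1:], np.array([0]))
```

The easy sample contributes almost nothing while the misclassified one dominates, which is the training emphasis the abstract describes.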

  • Research Article
  • Cited by 4
  • 10.1016/j.plaphe.2025.100132
PlantIF: Multimodal semantic interactive fusion via graph learning for plant disease diagnosis
  • Mar 1, 2026
  • Plant Phenomics
  • Xingcai Wu + 9 more

Plant diseases remain a major constraint on crop productivity, requiring timely and accurate diagnostic approaches to secure agricultural yields. While existing automated diagnosis methods primarily rely on image data and achieve notable results, their performance often declines in complex field environments with noise and interference. Multimodal learning provides a promising solution by integrating complementary cues from various data sources. However, the heterogeneity between plant phenotypes and other modalities, such as textual descriptions, poses a significant challenge for effective fusion. To address this issue, we propose PlantIF, a multimodal feature interactive fusion model for plant disease diagnosis based on graph learning. PlantIF comprises three key components: image and text feature extractors, semantic space encoders, and a multimodal feature fusion module. Specifically, we employ pre-trained image and text feature extractors to extract visual and textual features enriched with prior knowledge of plant diseases. Semantic space encoders then map these features into both shared and modality-specific spaces, enabling the capture of cross-modal and unique semantic information. To enhance context understanding, we design a multimodal feature fusion module to process and fuse different modal semantic information, and then extract the spatial dependency between plant phenotype and text semantics through the self-attention graph convolution network. We evaluate PlantIF on a multimodal plant disease dataset with 205,007 images and 410,014 texts, achieving 96.95% accuracy, 1.49% higher than existing models. These results demonstrate the potential of multimodal learning in plant disease diagnosis and highlight PlantIF’s value in precision agriculture. Codes are available at https://github.com/GZU-SAMLab/PlantIF .
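A single graph-convolution step, the building block of fusion modules like PlantIF's, can be sketched briefly. This is an illustrative minimal layer (symmetric normalization with self-loops and a ReLU), not the paper's self-attention graph convolution: node features propagate over a normalized adjacency so each fused token aggregates information from related tokens.

```python
import numpy as np

def gcn_layer(X, A, W):
    # H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    norm_adj = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm_adj @ X @ W, 0.0)

X = np.eye(4)                                      # 4 nodes, one-hot features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)                # path graph 0-1-2-3
W = np.eye(4)                                      # identity "weights" for clarity
H = gcn_layer(X, A, W)
```

With identity features and weights, each row of `H` is simply the normalized neighborhood of that node, making the aggregation easy to inspect; stacking such layers with learned `W` yields the multi-hop semantic propagation described in the abstract.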

  • Research Article
  • 10.1016/j.plaphe.2025.100163
Combining RGB imaging with a two-stage deep learning method to reveal genetic variation of wheat sprouting traits
  • Mar 1, 2026
  • Plant Phenomics
  • Bingxi Qin + 20 more