- New
- Research Article
- 10.1186/s13007-026-01510-z
- Mar 4, 2026
- Plant methods
- Tomohiro Hatano + 6 more
Cortical microtubules (CMTs), a component of the cytoskeleton, control the orientation and localization of newly deposited cellulose microfibrils in cell walls and thereby determine the shape, size, and structure of plant cells. Imaging of CMTs in plant tissues is generally performed on fluorescently labeled specimens under an optical fluorescence or confocal laser scanning microscope. However, visualizing individual CMTs by optical microscopy is challenging because the observable range is limited to superficial tissue layers accessible to light penetration. In contrast, transmission electron microscopy offers high-resolution visualization of CMTs in plant cells but is restricted to slightly oblique ultrathin sections with an approximate thickness of 70-100 nm. Given that field emission scanning electron microscopy (FE-SEM) provides both high spatial resolution and a broad field of view, the establishment of FE-SEM-based techniques for CMT visualization is highly desirable. Herein, we introduce a technique for visualizing CMTs within untreated plant tissues by combining two cryo-fracture techniques, freeze-knife fracture and freeze-tensile fracture, with FE-SEM. We successfully observed the arrangement of CMTs in several plant specimens, including young branches of ginkgo (Ginkgo biloba), calli from the leaves of hybrid poplar (Populus sieboldii × P. grandidentata), and root tips of the adzuki bean (Vigna angularis). CMTs were visualized on the protoplasmic fracture face (PF) using both cryo-FE-SEM and conventional room-temperature FE-SEM. While cryo-FE-SEM was employed to maintain native structures, room-temperature FE-SEM proved to be a viable alternative when specialized cryo-equipment was unavailable. The combination of freeze-fracture techniques with FE-SEM enables the visualization of CMT arrangement in plant tissues at high resolution and across a broad area without the need for staining or extraction of cellular components.
This technique is applicable to various plant tissues and allows for detailed observation of CMTs within these tissues, providing valuable insights into the role of microtubules in the division and differentiation of plant cells.
- Research Article
- 10.1186/s13007-026-01512-x
- Feb 26, 2026
- Plant methods
- Satinderpal Kaur + 2 more
Trichomes are hairlike protuberances on plants that deter herbivores by both physical and chemical means, with pre- and post-ingestive effects. Trichomes also serve as a protective layer, reducing transpiration loss and thereby improving abiotic stress tolerance. A common method for assessing the role of trichomes in defense is to extract trichomes, infuse them into an artificial diet, and then examine the feeding, growth, and developmental parameters of herbivores. Although methods such as brushing, using dry ice, and shaving with a razor have been commonly used, there is no consensus on the best and most efficient method for trichome extraction. In the current study, to identify the most efficient extraction method, we tested three common methods with some modifications: flash freezing followed by brushing, shaving with a scalpel, and vortexing followed by freezing. We used representative species from both eudicots and monocots with significant trichome diversity. For eudicots, we selected three cultivated plant species (tomato, sunflower, and Mexican squash) and two wild plant species (wild sunflower and wild squash); for monocots, we used the model plant rice. The efficiency of each method was examined using a combination of light and scanning electron microscopy. The results suggest that freezing followed by brushing is the most efficient method for extracting trichomes and can be recommended for eudicots. However, none of the evaluated methods were effective for the monocot rice.
- Research Article
- 10.1186/s13007-026-01502-z
- Feb 24, 2026
- Plant methods
- Han Palmers + 2 more
Pollen grains are the primary carriers of male genetic information during sexual reproduction. In angiosperms, pollen is produced in the anthers following meiosis, during which ploidy halving and genetic recombination occur, mixing the parental genomes and creating allelic variation. The genetic configuration of pollen is therefore of great interest for the study of meiotic recombination and cell division, as well as genetic variation in general, with applications in plant genetics, breeding, and evolution. Single pollen genotyping recently emerged as a powerful tool for the genetic characterization of the male germline, allowing a more detailed analysis of genetic diversity and reproductive behavior down to the level of the individual gamete. However, current methods for single pollen genotyping rely on manual sorting and expensive whole genome amplification technology. We developed a simple and straightforward pipeline for single pollen genotyping in apple, an economically important fruit crop. In brief, this method uses filter bursting to efficiently release pollen nuclei, followed by isolation of single nuclei via fluorescence-activated cell sorting. Single nuclei are then directly PCR genotyped, achieving up to 85% amplification success without whole genome amplification. Using this optimized pipeline, we developed a multiplex genotyping toolbox for apple that can be used to study meiotic recombination, meiotic restitution, and cross-over interference. Overall, this new single pollen genotyping procedure enables high-throughput genotyping of large pollen populations at the single-cell level, facilitating reproductive research in apple as a non-model crop species with potential for translation to other fruit species.
- Research Article
- 10.1186/s13007-026-01509-6
- Feb 22, 2026
- Plant methods
- Elena Kozgunova
CRISPR/Cas9-based genome editing in the model bryophyte Physcomitrium patens (commonly known as Physcomitrella) is widely used for gene knockout via small insertions or deletions (indels). In this study, we developed an efficient dual-gRNA system capable of producing large, targeted deletions across multiple genes, enabling straightforward detection by gel electrophoresis and simultaneous multi-gene knockout. We first compared the efficiency of polycistronic tRNA-gRNA arrays to conventional gRNA constructs expressed under individual promoters, using the checkpoint protein gene MAD2 as a target. We found that a polycistronic construct doubled the frequency of large gene deletions compared to a conventional design. We then demonstrated that simultaneous deletion of two or four genes, targeting the katanin and TPX2 gene families, respectively, can be achieved in a single transformation event. The polycistronic system also increased deletion frequencies in the multiplex context, with up to 42% efficiency for individual genes and successful recovery of quadruple mutants. One limitation is that deletion efficiency varied substantially among individual gRNA pairs, indicating that gRNA design remains a critical factor in multiplex editing. This study establishes a fast and efficient framework for simultaneous removal of multiple genes in Physcomitrella, providing a practical alternative to homologous recombination-based methods for functional and applied studies.
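To illustrate why large dual-gRNA deletions are easy to score on a gel: with genotyping primers flanking both target sites, the deletion allele yields an amplicon shortened by roughly the distance between the two Cas9 cut sites. A minimal sketch of that arithmetic (the function name and all numbers are hypothetical, not taken from the paper; junction indels would shift the sizes slightly):

```python
def expected_band_sizes(amplicon_len, cut1, cut2):
    """Expected PCR band sizes (bp) for the wild-type allele vs. a
    dual-gRNA deletion allele. cut1/cut2 are the two Cas9 cleavage
    positions within the genotyping amplicon (illustrative only)."""
    deletion = abs(cut2 - cut1)
    return amplicon_len, amplicon_len - deletion

# Hypothetical example: a 2,000 bp amplicon with cuts at positions
# 400 and 1,600 gives a ~800 bp mutant band, easily resolved from
# the 2,000 bp wild-type band on a standard agarose gel.
wt, mutant = expected_band_sizes(2000, 400, 1600)
```
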
- Research Article
- 10.1186/s13007-026-01508-7
- Feb 22, 2026
- Plant methods
- Fernanda Leiva + 2 more
Apple scab (AS), caused by the fungal pathogen Venturia inaequalis, is a major disease of apple that manifests as lesions on leaves and fruits. The disease compromises fruit quality and yield, leading to substantial economic losses. Traditional AS assessment relies on visual scoring, which is labor-intensive, subjective, and poorly reproducible. This study proposes a deep learning-based framework to overcome these limitations and to enable an accurate, scalable AS phenotyping approach. Deep learning techniques were employed for object detection and segmentation of AS symptoms on apple fruits. A two-stage fine-tuning process was applied to color images collected under orchard and laboratory conditions using the YOLO foundation model (YOLO11). The model was first trained to detect healthy apple fruits (Model 1) and subsequently refined to segment AS lesions (Model 2) using high-resolution imagery (864 × 864 pixels). Model 1 (Fruit Detection) achieved 0.98 precision, 0.95 recall, and 0.94 mAP50. Model 2 (Lesion Segmentation) achieved 0.64 precision, 0.75 recall, and 0.75 mAP50. The framework supports real-time processing of images and video. Despite challenges such as variable lighting and symptom heterogeneity, the use of high-resolution training data improved the segmentation accuracy (mAP50-95) of fine-scale lesions by over 50% compared to the previous YOLO architecture. These results demonstrate that the proposed deep learning-based approach provides a reliable pipeline for automated AS phenotyping. By improving precision and efficiency in both controlled and field environments, the model enhances apple grading assessments and accelerates breeding efforts to identify AS-resistant genotypes. Furthermore, this work establishes a solid foundation for broader applications in real-time plant disease monitoring and future integration of additional apple diseases.
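For readers unfamiliar with the reported metrics: precision, recall, and mAP50 all rest on matching predicted boxes to ground-truth annotations at an intersection-over-union (IoU) threshold of 0.5. A minimal sketch of that matching step (illustrative only, not the authors' evaluation code; the greedy one-to-one matching shown here is a simplification of standard mAP computation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions (assumed sorted by
    descending confidence) to ground truth at IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)  # each ground-truth box counts once
            tp += 1
    fp = len(preds) - tp   # unmatched predictions
    fn = len(gts) - tp     # missed ground-truth boxes
    return tp / (tp + fp), tp / (tp + fn)
```

Averaging precision over confidence thresholds (and, for mAP50-95, over IoU thresholds from 0.5 to 0.95) yields the mAP figures quoted above.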
- Research Article
- 10.1186/s13007-026-01507-8
- Feb 16, 2026
- Plant methods
- Sasa Tian + 6 more
Maize is susceptible to various diseases throughout its growth cycle, which can significantly reduce yields. The accurate identification of maize diseases with similar symptomatic manifestations is particularly challenging under field conditions due to heterogeneous lighting and variable weather conditions. This paper proposes a novel detection model named SCFM-DETR, which is based on an improved Real-Time DEtection TRansformer (RT-DETR) to achieve robust identification of maize diseases in complex environments. SimAM-StarNet is employed as the backbone for feature extraction in this model, reducing the number of parameters and improving multiscale feature fusion, thereby diminishing the impact of background noise. Furthermore, the original RepC3 module is replaced with a newly designed CGLU-FasterBlock-MANet (CFM) module, which enhances adaptive feature fusion for finer discriminative capability. The experimental results demonstrate that the SCFM-DETR model achieves an average precision of 96.7% and a recall of 95.8% on a maize disease dataset, exceeding the corresponding metrics of the baseline RT-DETR-R18 model by 3.1% and 6.0%. Additionally, the model reduces the number of parameters and computational load by 47% and 49%, respectively, making it highly suitable for deployment in computationally limited agricultural settings. This work offers a high-accuracy, lightweight framework that facilitates intelligent crop disease monitoring and supports the advancement of smart agriculture.
- Research Article
- 10.1186/s13007-026-01506-9
- Feb 8, 2026
- Plant methods
- Jianping Liu + 9 more
Fine-grained pest recognition is a key component of intelligent pest monitoring and precise control, and it is important for ensuring agricultural production safety. This paper proposes a generative self-supervised learning-based pest recognition model, termed PAFT-WPest, to address challenges in fine-grained pest recognition, including small inter-class differences, large intra-class variations, complex background interference, and limited annotated data. The model employs partial-convolution spatial attention to focus on pest regions while suppressing redundant background information. Channel semantic selection and frequency-domain modeling are introduced to enhance the model's ability to perceive subtle detail differences. In addition, the model captures dependency relationships among different parts of the pest body to improve the modeling of global structure and semantic information. Furthermore, two fine-grained wolfberry pest datasets that distinguish pest growth stages and damage locations are constructed, and a continual pre-training strategy is adopted to enhance cross-scenario adaptability. Experimental results show that PAFT-WPest achieves accuracies of 76.83%, 91.53%, 98.70%, 79.27%, and 97.34% on the public pest datasets IP102, Butterfly-200, WPIT9K, Rice Pest, and Jute Pest, respectively, and accuracies of 97.82% and 94.69% on the self-built wolfberry pest datasets WP45 and WP11. These results indicate that the proposed model can improve fine-grained pest recognition performance under complex backgrounds, providing a feasible approach for agricultural pest monitoring and classification.
- Research Article
- 10.1186/s13007-026-01504-x
- Feb 8, 2026
- Plant methods
- Shaoqi Fan + 11 more
Crop phenotyping of important agronomic traits in field conditions at single-plant resolution has long been a major bottleneck in both genetic analysis (e.g. large-scale association/linkage analysis) and breeding applications (e.g. genomic prediction/selection). Despite growing interest, ultra-affordable, high-throughput and accurate phenotyping tools for maize ears remain limited. Here, we developed OpenEar, an open-source, low-cost phenotyping system that combines a DIY maize ear imaging platform with a deep learning-based end-to-end phenotypic data extraction pipeline. The imaging platform is composed of 3D-printed parts and electronic components easily available from local retailers to perform high-quality 360° surface scanning of maize ears. Our pipeline first employs CNN-based models to identify normally developed ears suitable for phenotyping, followed by reliable segmentation of ears and ear surface projection images by YOLOv11-based models, from which ten key traits are subsequently extracted. OpenEar demonstrates reliable agreement with manual measurements across a diverse set of ear- and kernel-related traits, including ear length (R² = 0.972), ear diameter (R² = 0.905), ear volume (R² = 0.976), ear weight (R² = 0.878), kernel number (R² = 0.98), kernel row number (R² = 0.888), kernel number per row (R² = 0.852), kernel thickness (R² = 0.705), kernel width (R² = 0.515), and thousand kernel weight (R² = 0.605). A user-friendly graphical interface is provided for manual inspection of ears after computer annotation. Manually annotated ear videos and images are publicly released as a resource for the crop phenomics community. Our study highlights the potential of DIY-based low-cost solutions to make phenotyping more accessible in crop genetic analysis and breeding.
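The R² values quoted above are coefficients of determination between manual reference measurements and OpenEar's automated estimates. A minimal sketch of that agreement statistic (illustrative only, not the authors' analysis code):

```python
def r_squared(manual, automated):
    """Coefficient of determination (R²) of automated values against
    manual reference measurements: 1 - SS_res / SS_tot."""
    mean = sum(manual) / len(manual)
    ss_res = sum((m - a) ** 2 for m, a in zip(manual, automated))
    ss_tot = sum((m - mean) ** 2 for m in manual)
    return 1 - ss_res / ss_tot

# Hypothetical example: five ears measured by hand vs. by the pipeline.
manual = [1, 2, 3, 4, 5]
automated = [1.1, 1.9, 3.2, 3.8, 5.1]
r2 = r_squared(manual, automated)  # close to 1 when agreement is high
```
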
- Research Article
- 10.1186/s13007-026-01505-w
- Feb 7, 2026
- Plant methods
- Xulong Huang + 7 more
Precise, non-destructive phenotyping of saffron during vegetative growth is critical for optimizing corm yield and accelerating breeding programs, yet systematic 3D measurements have remained elusive due to extreme morphological challenges: ultra-narrow leaves, severe mutual occlusion, and prostrate growth architecture. Traditional single-view imaging systems fail to resolve individual leaves under such conditions, limiting phenotypic analysis to whole-canopy descriptors. Here, we developed a specialized organ-level 3D phenotyping workflow designed for narrow, overlapping leaves, using a low-cost dual-camera rotary acquisition system integrated with open-source Structure-from-Motion Multi-View Stereo (SfM-MVS) reconstruction. The dual-perspective strategy reduces occlusion-induced errors by 75% compared to single-view approaches, enabling robust organ-level segmentation via a multi-constraint clustering strategy. Automated measurements of leaf length and width across five developmental stages demonstrate exceptional agreement with manual references (R² > 0.94, MAPE < 6%), matching accuracy benchmarks established for broad-leaved crops with commercial-grade hardware, at 100× lower cost. Systematic voxel sensitivity analysis across nine scales identified optimal preprocessing parameters (2 cm voxel size) balancing measurement precision with computational efficiency, addressing a critical reproducibility gap in plant phenotyping. Exploratory longitudinal tracking revealed that above-ground biomass was correlated with final corm yield (r = 0.68, P < 0.001), with mid-vegetative canopy volume also showing a strong correlation (r = 0.52, P < 0.01), suggesting potential resource allocation trade-offs between vegetative expansion and storage organ development. This work demonstrates that organ-level 3D phenotyping of narrow, overlapping leaves is achievable using low-cost imaging hardware and transparent methodological workflows.
Complete documentation of algorithmic parameters and hardware specifications enables direct replication and adaptation to other narrow-leaved crops (wheat, rice, onion, leek), democratizing access to high-throughput phenotyping in resource-limited settings. The workflow advances plant phenomics by demonstrating that methodological transparency and cost-effectiveness need not compromise measurement precision, opening new avenues for phenotype-to-genotype mapping and predictive breeding in underutilized crops.
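The accuracy (MAPE) and association (Pearson's r) statistics reported above can be illustrated with a minimal sketch (illustrative only, not the authors' analysis code; the example data are hypothetical):

```python
import math

def mape(reference, measured):
    """Mean absolute percentage error of measured values against
    manual reference values, in percent."""
    errors = (abs((r - m) / r) for r, m in zip(reference, measured))
    return 100 * sum(errors) / len(reference)

def pearson_r(x, y):
    """Pearson correlation coefficient, as used for the
    biomass-to-corm-yield association."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: two leaf-length references (cm) vs. automated
# measurements; a perfectly linear biomass/yield pair gives r = 1.
error_pct = mape([10, 20], [9, 22])        # 10% average error
r = pearson_r([1, 2, 3], [2, 4, 6])        # perfect positive correlation
```
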
- Research Article
- 10.1186/s13007-026-01500-1
- Feb 3, 2026
- Plant methods
- Ruiqing Pan + 8 more