A lightweight deep learning model for synchronized crop stem detection and row segmentation at the seedling stage: Exploring their contribution to agricultural navigation line extraction
- Research Article
22
- 10.1016/j.compag.2023.107964
- Jun 28, 2023
- Computers and Electronics in Agriculture
Recognition method of maize crop rows at the seedling stage based on MS-ERFNet model
- Research Article
2
- 10.1155/2022/1623462
- Jun 25, 2022
- Computational Intelligence and Neuroscience
This study aims to remove chloride ions encountered in production and daily life, enhance the durability of buildings, and protect the natural environment from pollution. Current dechlorination technology is discussed on the basis of relevant theories, such as the lightweight deep learning (DL) model and the characteristics of chloride ions. Next, data statistics and comparative analysis are used to study the adsorption and desorption performance of dechlorination adsorbents. Finally, the lightweight DL model is applied to a chloride diffusion prediction experiment on slag powder and fly ash concrete. The results show that the chloride ion concentration decreases gradually as the adsorption time is extended, while the chloride ion removal rate rises with increasing temperature. The removal rate of chloride ions in water decreases slowly as the amount of adsorbent increases. Sodium hydroxide at 2 mol/L is therefore the most appropriate alkali concentration for adsorbent regeneration. In addition, the regeneration performance of the adsorbent gradually declines as the sodium chloride concentration in the solution increases. When the lightweight DL model is applied to the chloride diffusion prediction experiment, the error between the model and the experimental results is about 0.2 at curing ages of 18, 90, and 180 days, indicating that the lightweight DL model is feasible for predicting chloride ion diffusion. This study therefore designs a dechlorination experiment based on the lightweight DL model, providing a new theoretical basis and optimization direction for chloride ion removal in future industry.
- Research Article
3
- 10.1155/2022/4670523
- Jun 11, 2022
- Computational Intelligence and Neuroscience
The purpose of this study is to improve the training effect of physical education (PE) based on the teaching concept of ideological and political courses. The research is supported by a lightweight deep learning (DL) model on the Internet of Things (IoT). Through intelligent recognition and classification of human actions and images, it discusses a PE and training scheme based on the lightweight DL model. In addition, through optimization of the accelerated compression algorithm and evaluation of the PE and training effect of the OpenPose algorithm, an optimization model of the PE and training effect is established. The results indicate that after 120 iterations, the recognition accuracy of the convolutional neural network (CNN) algorithm improves only to about 75%, while that of the OpenPose algorithm reaches about 85%; under the same number of iterations, recognition accuracy improves by 9.8% over the CNN algorithm. In addition, when the number of nodes in the network layer is 60, the system delay of the proposed OpenPose algorithm is smaller, at only 10.8 s, saving at least 1.2 s compared with the CNN algorithm under the same conditions. The advantage of the algorithm is that it improves the efficiency of physical training and teaching, and this research offers an important reference for the digital and intelligent development of PE teaching modes.
- Research Article
3
- 10.1155/2022/9066648
- Jun 13, 2022
- Computational Intelligence and Neuroscience
On the basis of scene visual understanding technology, this research aims to further improve the classification efficiency and accuracy of art design scenes. A lightweight deep learning (DL) model based on big data is used as the main method to achieve real-time detection and recognition of multiple targets and classification of multilabel scenes. The research introduces the foundations of the DL network and the lightweight object detection involved, constructs the data for a multilabel scene classifier, and describes the design of the convolutional neural network (CNN) model. On public datasets, the effectiveness of the lightweight object detection algorithm is verified to ensure its feasibility for classifying actual scenes. The simulation results indicate that compared with the YOLOv3-Tiny model, the improved IRDA-YOLOv3 model reduces the number of parameters by 56.2%, the amount of computation by 46.3%, and the forward computation time of the network by 0.2 ms, meaning that the improved IRDA-YOLOv3 network achieves a lightweight design. In the scene classification of complex traffic roads, the multilabel scene classification model can predict all kinds of semantic information in a single image, and the classification accuracy for the four scenes exceeds 90%. In summary, the discussed classification method based on the lightweight DL model is suitable for complex practical scenes, and the constructed lightweight network improves the representational ability of the network and has research value for scene classification problems.
- Research Article
11
- 10.3390/agriculture14091446
- Aug 24, 2024
- Agriculture
In precision agriculture, after vegetable transplanters plant the seedlings, field management during the seedling stage is necessary to optimize the vegetable yield. Accurately identifying and extracting the centerlines of crop rows during the seedling stage is crucial for achieving the autonomous navigation of robots. However, the transplanted ridges often experience missing seedling rows. Additionally, due to the limited computational resources of field agricultural robots, a more lightweight navigation line fitting algorithm is required. To address these issues, this study focuses on mid-to-high ridges planted with double-row vegetables and develops a seedling band-based navigation line extraction model, the Seedling Navigation Convolutional Neural Network (SN-CNN). First, we propose the C2f_UIB module, which effectively reduces redundant computations by integrating Network Architecture Search (NAS) technologies, thus improving the model's efficiency. Additionally, the model incorporates the Simplified Attention Mechanism (SimAM) in the neck section, enhancing the focus on hard-to-recognize samples. The experimental results demonstrate that the proposed SN-CNN model outperforms YOLOv5s, YOLOv7-tiny, YOLOv8n, and YOLOv8s in terms of model parameters and accuracy. The SN-CNN model has a parameter count of only 2.37 M and achieves an mAP@0.5 of 94.6%. Compared to the baseline model, the parameter count is reduced by 28.4% and the accuracy is improved by 2%. Finally, for practical deployment, the SN-CNN algorithm was implemented on the NVIDIA Jetson AGX Xavier, an embedded computing platform, to evaluate its real-time performance in navigation line fitting. We compared two fitting methods, Random Sample Consensus (RANSAC) and least squares (LS), using 100 images (50 test images and 50 field-collected images) to assess accuracy and processing speed.
The RANSAC method achieved a root mean square error (RMSE) of 5.7 pixels and a processing time of 25 milliseconds per image, demonstrating a superior fitting accuracy, while meeting the real-time requirements for navigation line detection. This performance highlights the potential of the SN-CNN model as an effective solution for autonomous navigation in field cross-ridge walking robots.
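The RANSAC-versus-least-squares comparison above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: both fitters estimate a 2D navigation line y = m·x + b from candidate crop-row points, with RANSAC first discarding outliers (e.g., mis-detected seedlings) before a final least-squares refit. All function names and parameters here are hypothetical.

```python
import random

def fit_ls(pts):
    """Ordinary least-squares fit of y = m*x + b."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def fit_ransac(pts, n_iter=200, tol=2.0, seed=0):
    """RANSAC: sample 2-point candidate lines, keep the largest
    inlier set, then refine that set with least squares."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        if x1 == x2:  # this sketch skips vertical candidate lines
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in pts if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best):
            best = inliers
    return fit_ls(best if len(best) >= 2 else pts)
```

On points sampled from a single crop row plus a few gross outliers, `fit_ransac` recovers the row line while `fit_ls` on the raw points is pulled toward the outliers, which mirrors the accuracy gap reported above.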
- Research Article
3
- 10.48084/etasr.7777
- Aug 2, 2024
- Engineering, Technology & Applied Science Research
Traditional intrusion detection systems rely on known patterns and irregularities. This study proposes an approach to reinforce security measures for QR codes used in marketing and identification, investigating a lightweight Deep Learning (DL) model to detect cyberattacks embedded in QR codes. A model that classifies QR codes into three categories, normal, phishing, and malware, is proposed. The model achieves high precision and F1 scores for normal and phishing codes (Classes 0 and 1), indicating accurate identification. However, its recall for malware (Class 2) is lower, suggesting potential missed detections in this category and stressing the need to further explore techniques for improving the detection of malware QR codes. Despite this limitation, the overall accuracy of the model remains an impressive 99%, demonstrating its effectiveness in distinguishing normal and phishing codes from potentially malicious ones.
- Research Article
16
- 10.1186/s13007-022-00913-y
- Jul 2, 2022
- Plant Methods
Background: The application of autopilot technology is conducive to achieving path-planning navigation and liberating labor productivity. In addition, self-driving vehicles can drive according to the growth state of crops to ensure spraying accuracy and pesticide effect. Navigation line detection is the core technology of self-driving, and it plays an increasingly important role in the development of Chinese intelligent agriculture. General algorithms for seedling line extraction in agricultural fields target large-seedling crops, and scholars currently focus on reducing the impact of crop row adhesion on crop row extraction. However, for seedling crops, especially double-row-sown seedling crops, navigation lines cannot be extracted effectively because of missing plants or the interference of rut marks caused by wheel pressure on seedlings. To solve these problems, this paper proposes an algorithm that combines edge detection and OTSU thresholding to determine the seedling column contours of two narrow rows for cotton sown in wide and narrow rows. Least squares fitting is then used to fit the navigation line in the gap between the two narrow cotton rows, which adapts well to missing seedlings and rut-print interference. Results: The algorithm was developed using images of cotton at the seedling stage, and detection accuracy was tested under different lighting conditions and on maize and soybean at the seedling stage. The navigation line detection accuracy was 99.2% for seedling cotton (average processing time 6.63 ms per frame), 98.1% for seedling maize (6.97 ms per frame), and 98.4% for seedling soybean (6.72 ms per frame). In addition, the standard deviation of lateral deviation is 2 cm, and the standard deviation of heading deviation is 0.57 deg. Conclusion: The proposed row detection algorithm achieves state-of-the-art performance. Moreover, the method maintains normal spraying speed by adapting to different shadow interference and to the randomness of crop row growth. In terms of applications, it can serve as a reference for navigation line fitting for other growing crops in complex, shadow-disturbed environments.
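Otsu's method, which the paper combines with edge detection to separate seedling rows from soil, picks the grayscale threshold that maximizes the between-class variance of the intensity histogram. Below is the textbook histogram formulation as a self-contained sketch, not the authors' code:

```python
def otsu_threshold(pixels):
    """Return the 8-bit threshold maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = 0      # background pixel count so far
    sum_b = 0    # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b              # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a bimodal field image (dark soil, bright seedlings) the returned threshold falls between the two intensity clusters, which is exactly what the row segmentation step needs.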
- Research Article
9
- 10.3390/diagnostics14192225
- Oct 5, 2024
- Diagnostics (Basel, Switzerland)
The reproductive age of women is particularly vulnerable to the effects of polycystic ovarian syndrome (PCOS). High levels of testosterone and other male hormones are frequent contributors to PCOS, and PCOS is believed to be a major cause of miscarriages and ovulation problems. A recent study found that 31.3% of Asian women are afflicted with PCOS. Treating women with life-threatening disorders associated with PCOS requires more research. Prior research has autonomously classified PCOS using a number of different machine learning (ML) techniques, but ML-based approaches involve hand-crafted feature extraction and suffer from low performance, which cannot be ignored for the accurate prediction and identification of PCOS. Hence, the prime focus of this study is predicting PCOS using cutting-edge deep learning methods that offer automated feature engineering with better performance. The proposed method comprises three lightweight deep learning models (LSTM-based, CNN-based, and CNN-LSTM-based), incorporating SMOTE for dataset balancing to obtain a valid performance. The three models achieve accuracies of 92.04%, 96.59%, and 94.31%, ROC-AUCs of 92.0%, 96.6%, and 94.3%, parameter counts of 6689, 297, and 13,285, and training times of 67.27 s, 10.02 s, and 18.51 s, respectively. In addition, the DeLong test is performed to compare AUCs and assess the statistical significance of all three models. Among the three, the SMOTE + CNN model performs best in terms of accuracy, precision, recall, AUC, parameter count, training time, and DeLong's p-value. Moreover, a performance comparison with other state-of-the-art PCOS detection studies and methods validates the better performance of the proposed model. Thus, the proposed model provides the greatest performance, which can lead to a reduction in the number of failed pregnancies and help detect PCOS in its early stages.
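SMOTE, used above to balance the PCOS dataset before training, synthesizes new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A stripped-down sketch follows; real projects would typically call `imblearn.over_sampling.SMOTE`, and the helper below is illustrative only:

```python
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: each synthetic sample is a random point
    on the segment between a minority sample and one of its k nearest
    minority neighbours.  Assumes len(minority) >= 2."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: dist2(base, p),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(bi + gap * (ni - bi)
                               for bi, ni in zip(base, nb)))
    return synthetic
```

Because every synthetic point is a convex combination of two minority samples, the oversampled set stays inside the minority class's region of feature space rather than duplicating records verbatim.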
- Research Article
- 10.52783/jes.820
- Mar 28, 2024
- Journal of Electrical Systems
One of the most significant staple crops in the world is rice. Rice seedlings are particularly susceptible to salt stress during the seedling stage, which can negatively affect crop quality and yield. Traditional approaches for assessing the susceptibility of rice crops to salt stress at this stage are inadequate and time consuming. This study emphasizes the necessity of employing a deep learning model instead of traditional methods to identify and classify salinity stress in rice seedlings using field images. To predict salinity stress in rice crops, the research examines the significance of the image processing methods employed in deep learning models. To enhance the clarity and visual representation of salinity-induced stress symptoms, several image enhancement techniques are explored, such as noise reduction, contrast augmentation, and image normalization. To further capture and quantify the distinct visual features related to salinity stress, feature extraction techniques such as texture analysis, shape analysis, and color-based segmentation are used. Deep learning models such as VGG16 and VGG19 then take these extracted features as input to classify the severity of salinity stress in rice seedlings with scores of 1, 3, 5, 7, and 9. A comprehensive set of rice seedling field images taken under various salinity stress conditions is used to assess the suggested method. Experimental results with 99.40% accuracy demonstrate the effectiveness of image processing techniques in improving the discriminatory power of deep learning models for salinity stress prediction. The combination of image enhancement and feature extraction methods significantly improves the overall accuracy and reliability of the predictions, enabling farmers to make informed decisions regarding crop management and potential interventions to mitigate salinity stress.
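Of the enhancement steps listed above (noise reduction, contrast augmentation, normalization), the simplest to illustrate is a min-max contrast stretch, which remaps a grayscale channel to the full 0–255 range so that faint stress symptoms span more of the dynamic range. This is a generic illustration, not the study's pipeline:

```python
def contrast_stretch(pixels, lo=0, hi=255):
    """Min-max contrast stretch of a grayscale channel to [lo, hi]."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:          # flat image: nothing to stretch
        return [lo] * len(pixels)
    return [round(lo + (p - pmin) * (hi - lo) / (pmax - pmin))
            for p in pixels]
```

For example, a low-contrast channel with values 50–150 is expanded so its darkest pixel maps to 0 and its brightest to 255, with intermediate values scaled linearly.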
- Research Article
5
- 10.1016/j.imavis.2024.105016
- Apr 5, 2024
- Image and Vision Computing
Detection of dental periapical lesions using retinex based image enhancement and lightweight deep learning model
- Research Article
19
- 10.1673/031.010.11701
- Jul 1, 2010
- Journal of Insect Science
The tropical armyworm, Spodoptera litura (F.) (Lepidoptera: Noctuidae), is an important pest of tobacco, Nicotiana tabacum L. (Solanales: Solanaceae), in South China that is becoming increasingly resistant to pesticides. Six potential trap crops were evaluated to control S. litura on tobacco. Castor bean, Ricinus communis L. (Malpighiales: Euphorbiaceae), and taro, Colocasia esculenta (L.) Schott (Alismatales: Araceae), hosted significantly more S. litura than peanut, Arachis hypogaea L. (Fabales: Fabaceae), sweet potato, Ipomoea batatas Lam. (Solanales: Convolvulaceae) or tobacco in a greenhouse trial, and tobacco field plots with taro rows hosted significantly fewer S. litura than those with rows of other trap crops or without trap crops, provided the taro was in a fast-growing stage. When these crops were grown along with eggplant, Solanum melongena L. (Solanales: Solanaceae), and soybean, Glycine max L. (Fabales: Fabaceae), in separate plots in a randomized matrix, tobacco plots hosted more S. litura than the other crop plots early in the season, but late in the season, taro plots hosted significantly more S. litura than tobacco, soybean, sweet potato, peanut or eggplant plots. In addition, higher rates of S. litura parasitism by Microplitis prodeniae Rao and Chandry (Hymenoptera: Braconidae) and Campoletis chlorideae Uchida (Ichneumonidae) were observed in taro plots compared to other crop plots. Although taro was an effective trap crop for managing S. litura on tobacco, it did not attract S. litura in the seedling stage, indicating that taro should either be planted 20–30 days before tobacco, or alternative control methods should be employed during the seedling stage.
- Research Article
8
- 10.3390/agriculture13081496
- Jul 27, 2023
- Agriculture
Navigation line extraction is critical for precision agriculture and automatic navigation. A novel method for extracting navigation lines based on machine vision is proposed herein, using straight lines detected from high-ridge crop rows. To address the low level of automation of machines operating in field environments under the high-ridge cultivation mode for broad-leaved plants, a navigation line extraction method suitable for multiple growth periods and with high timeliness is designed. The method comprises four sequentially linked phases: image segmentation, feature point extraction, navigation line calculation, and dynamic feedback of the segmentation horizontal strip number. The a* component of the CIE-Lab colour space is extracted to preliminarily extract the crop row features, and the OTSU algorithm is combined with morphological processing to completely separate the crop rows from the background. The crop row feature points are extracted using an improved isometric segmented vertical projection method. When calculating the navigation lines, an adaptive clustering method is used to cluster the adjacent feature points, a dynamic segmentation point clustering method is used to determine the final clustered feature point sets, and the feature point sets are optimised using lateral distance and point-line distance methods. In the optimisation process, a linear regression method based on the Huber loss function is used to fit the optimised feature point set to obtain the crop row centreline, and the navigation line is calculated from the two crop lines. Finally, before the next frame is processed, a feedback mechanism that calculates the number of horizontal strips for the next frame is introduced to improve the algorithm's ability to adapt to multiple growth periods. The experimental results show that the proposed method meets the efficiency requirements for visual navigation, with an average image processing time of 38.53 ms over four samples.
Compared with the least squares method, the proposed method can adapt to a longer growth period of crops.
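The Huber-loss line fit used above to obtain the crop-row centreline can be approximated with iteratively reweighted least squares: residuals inside a band δ keep full quadratic weight, while larger residuals are down-weighted linearly, so isolated mis-detected feature points pull the line far less than under ordinary least squares. A compact sketch under these assumptions, not the authors' implementation:

```python
def huber_line_fit(pts, delta=1.0, n_iter=30):
    """Fit y = m*x + b with Huber loss via iteratively reweighted LS."""
    w = [1.0] * len(pts)
    m, b = 0.0, 0.0
    for _ in range(n_iter):
        # weighted least-squares solve for the current weights
        sw = sum(w)
        sx = sum(wi * x for wi, (x, _) in zip(w, pts))
        sy = sum(wi * y for wi, (_, y) in zip(w, pts))
        sxx = sum(wi * x * x for wi, (x, _) in zip(w, pts))
        sxy = sum(wi * x * y for wi, (x, y) in zip(w, pts))
        denom = sw * sxx - sx * sx
        if denom == 0:
            break
        m = (sw * sxy - sx * sy) / denom
        b = (sy - m * sx) / sw
        # Huber weights: quadratic inside |r| <= delta, linear outside
        w = [1.0 if abs(y - (m * x + b)) <= delta
             else delta / abs(y - (m * x + b))
             for x, y in pts]
    return m, b
```

With feature points on one row centreline plus a single gross outlier, the Huber fit stays close to the true line while a plain least-squares fit of the same points is visibly skewed, which is the robustness property the paper relies on.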
- Research Article
3
- 10.1109/iscas46773.2023.10181356
- May 21, 2023
- IEEE International Symposium on Circuits and Systems proceedings. IEEE International Symposium on Circuits and Systems
Closed-loop sleep modulation is an emerging research paradigm to treat sleep disorders and enhance sleep benefits. However, two major barriers hinder the widespread application of this research paradigm. First, subjects often need to be wire-connected to rack-mount instrumentation for data acquisition, which negatively affects sleep quality. Second, conventional real-time sleep stage classification algorithms give limited performance. In this work, we conquer these two limitations by developing a sleep modulation system that supports closed-loop operations on the device. Sleep stage classification is performed using a lightweight deep learning (DL) model accelerated by a low-power field-programmable gate array (FPGA) device. The DL model uses a single channel electroencephalogram (EEG) as input. Two convolutional neural networks (CNNs) are used to capture general and detailed features, and a bidirectional long-short-term memory (LSTM) network is used to capture time-variant sequence features. An 8-bit quantization is used to reduce the computational cost without compromising performance. The DL model has been validated using a public sleep database containing 81 subjects, achieving a state-of-the-art classification accuracy of 85.8% and a F1-score of 79%. The developed model has also shown the potential to be generalized to different channels and input data lengths. Closed-loop in-phase auditory stimulation has been demonstrated on the test bench.
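The 8-bit quantization applied above can be illustrated with symmetric linear quantization: each float weight is mapped to an int8 code through a single scale factor, trading a bounded rounding error for roughly 4× less memory than float32. This is a generic sketch; the paper's exact quantization scheme is not specified here:

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8 codes."""
    max_abs = max(abs(w) for w in weights)
    if max_abs == 0.0:            # all-zero tensor: nothing to scale
        return [0] * len(weights), 1.0
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Map int8 codes back to approximate float weights."""
    return [qi * scale for qi in q]
```

The round-trip error per weight is bounded by the scale factor, which is why such schemes can, as the paper reports, cut computational cost without compromising classification performance.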
- Research Article
- 10.1007/s12672-026-04487-2
- Jan 24, 2026
- Discover oncology
Lung cancer is one of the major cancers worldwide, and rapid, accurate diagnosis is crucial for subsequent treatment and management. Currently, pathological subtype detection requires clinical experts to invest significant time and effort, making the development of automatic, efficient detection models essential. This study developed BreezeNet, a lightweight deep learning framework designed for the precise, automated recognition of lung adenocarcinoma, lung squamous cell carcinoma, and benign lung tissue. Compared with current mainstream deep learning models such as VGG, GoogleNet, and MobileNet, BreezeNet demonstrated superior performance on key metrics such as precision and accuracy. The experimental results show that BreezeNet performs excellently across various metrics, particularly in its number of parameters. Specifically, BreezeNet achieved a precision of 0.9749, a recall of 0.9742, an F1-score of 0.9742, and an accuracy of 0.9789, slightly better than traditional deep learning models such as AlexNet, VGG, GoogleNet, ResNet, and MobileNet. However, BreezeNet's most significant advantage lies in its parameter count of only 1,256,679, far lower than AlexNet's 14,587,587 and ResNet's 23,514,179. The model is therefore not only competitive in performance but also greatly reduces computational resource requirements, enhancing its lightweight nature and deployment efficiency. Compared with traditional deep learning models such as AlexNet, VGG, and ResNet, BreezeNet achieves slightly better performance across all key metrics, with up to 1.6% higher accuracy, 1.76% higher F1-score, and over 18× fewer parameters, highlighting its superior lightweight design and diagnostic effectiveness. The developed model can efficiently perform automated subtyping of lung cancer cells, providing accurate diagnostic recommendations for doctors, which will help improve the efficiency of lung cancer diagnosis and thereby enhance patient survival rates.
- Research Article
37
- 10.3390/rs13142822
- Jul 18, 2021
- Remote Sensing
An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods in stand assessment are labor intensive and time consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage in unmanned aerial system (UAS) images. These models were trained with two datasets containing 400 and 900 images with variations in plant size and soil background brightness. The performance of these models was assessed with two testing datasets of different dimensions: testing dataset 1 with 300 by 400 pixels and testing dataset 2 with 250 by 1200 pixels. The model validation results showed that the mean average precision (mAP) and average recall (AR) were 79% and 73% for the CenterNet model, and 86% and 72% for the MobileNet model, with 900 training images. The accuracy of cotton plant detection and counting was higher with testing dataset 1 for both the CenterNet and MobileNet models. The results showed that the CenterNet model had a better overall performance for cotton plant detection and counting with 900 training images, and they also indicated that more training images are required when applying object detection models to images with dimensions different from those of the training datasets. The mean absolute percentage error (MAPE), coefficient of determination (R2), and root mean squared error (RMSE) of the cotton plant counting were 0.07%, 0.98, and 0.37, respectively, with testing dataset 1 for the CenterNet model with 900 training images. Both the MobileNet and CenterNet models have the potential to detect and count cotton plants accurately and in a timely manner from high-resolution UAS images at the seedling stage.
This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.
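The stand-count evaluation metrics reported above (MAPE, R², RMSE) are straightforward to compute from per-image true and predicted plant counts. A small helper with hypothetical names, for readers reproducing such an evaluation:

```python
import math

def count_metrics(true_counts, pred_counts):
    """MAPE (%), RMSE, and R^2 for per-image plant counts."""
    n = len(true_counts)
    pairs = list(zip(true_counts, pred_counts))
    mape = 100.0 / n * sum(abs(t - p) / t for t, p in pairs)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in pairs) / n)
    mean_t = sum(true_counts) / n
    ss_res = sum((t - p) ** 2 for t, p in pairs)
    ss_tot = sum((t - mean_t) ** 2 for t in true_counts)
    r2 = 1.0 - ss_res / ss_tot
    return mape, rmse, r2
```

For instance, true counts [10, 20] against predictions [11, 19] give a MAPE of 7.5%, an RMSE of 1.0, and an R² of 0.96.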