Abstract

Recent advances in image processing have substantially improved pest detection and classification in peanut crops. This study introduces an approach that optimizes image features for accurate pest identification. Drawing on established image analysis methodologies, our model employs a tailored architecture for pest detection, segmentation, and classification. By integrating dual-branch segment representations with a dual-layer transformer encoder, it enriches image representations and consolidates pest image segments of varying sizes. We evaluate the approach on three pest datasets (Aphids, Wireworm, and Gram Caterpillar) to support comprehensive analysis and model validation. Before training, the datasets are preprocessed extensively: features are extracted, image quality issues are addressed, and normalization procedures standardize the data for input to the model. The method extracts key features through self-attention mechanisms and standardized scaling to strengthen predictive performance. Experiments show that the approach outperforms established benchmarks in pest detection and classification accuracy. Overall, the proposed framework optimizes feature extraction and improves predictive accuracy for pest identification in peanut crops, addressing the particular challenges of agricultural pest imagery.
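
The abstract gives no implementation details, so the following is only a minimal sketch of the dual-branch idea it describes: two patch ("segment") embeddings at different sizes feed a two-layer transformer encoder with self-attention, followed by a classification head for the three pest classes. It assumes PyTorch, and all hyper-parameters (patch sizes 8 and 16, embedding dimension 128, 128x128 inputs) are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of a dual-branch, dual-layer transformer classifier.
# Not the authors' implementation; all sizes below are assumed for illustration.
import torch
import torch.nn as nn

class DualBranchPestClassifier(nn.Module):
    def __init__(self, embed_dim=128, num_classes=3):
        super().__init__()
        # Branch 1: small patches capture fine pest texture.
        self.small_patch = nn.Conv2d(3, embed_dim, kernel_size=8, stride=8)
        # Branch 2: larger patches capture coarser leaf/background context.
        self.large_patch = nn.Conv2d(3, embed_dim, kernel_size=16, stride=16)
        # Dual-layer (two-layer) transformer encoder over the combined tokens.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # Tokenize each branch: (B, 3, H, W) -> (B, N, embed_dim).
        t_small = self.small_patch(x).flatten(2).transpose(1, 2)
        t_large = self.large_patch(x).flatten(2).transpose(1, 2)
        # Consolidate segments of varying sizes into one token sequence.
        tokens = self.encoder(torch.cat([t_small, t_large], dim=1))
        # Mean-pool the tokens and classify into the three pest categories.
        return self.head(self.norm(tokens.mean(dim=1)))

# Usage on a dummy batch of normalized 128x128 RGB crops.
model = DualBranchPestClassifier()
logits = model(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 3])
```

In this sketch the two branches are fused simply by concatenating their token sequences before the shared encoder; the paper's actual fusion strategy, segmentation branch, and preprocessing pipeline are not specified in the abstract.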
