Abstract

Crop disease recognition plays a crucial role in agricultural production. However, disease images are large and contain substantial redundant information, which reduces the effectiveness of deep neural networks at extracting disease features. To address this issue, and considering that not all image regions are relevant to disease recognition, this study proposes an efficient crop disease recognition method that dynamically reduces image redundancy. The method is a two-stage process. In the first stage, the lightweight CA-AnchorNet, which incorporates coordinate attention, swiftly generates a feature map of the affected crop areas. Class activation maps (CAMs) are then used to identify the disease feature regions, highlighting the areas that exhibit class discriminability. These regions are mapped from the lower-resolution input to the corresponding locations in the higher-resolution original image, and the target patch is extracted. In the second stage, these local semantic patches, with their reduced spatial redundancy, are fed into the lightweight PatchNet for accurate recognition. PatchNet incorporates the Inception-C module and the ACON-C activation function, which enhance the model's ability to represent the multi-scale and non-linear characteristics of crop disease features. The method requires no manually annotated region boxes and achieves a recognition accuracy of 99.86% on a 12-class crop disease dataset captured in complex environments, with a parameter count of only 0.98 M. Its accurate localization and low parameter count make it well suited to efficient, high-precision recognition of crop diseases in complex environments.
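The first-stage pipeline the abstract describes (CAM localization on a low-resolution input, mapping the discriminative region back to the high-resolution original, and cropping the target patch) can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions, not the paper's implementation: the function name, the `patch_size` parameter, and the peak-centered square crop are illustrative choices, and CA-AnchorNet's actual CAM computation and region-selection strategy may differ.

```python
import torch
import torch.nn.functional as F

def extract_discriminative_patch(feature_map, fc_weight, class_idx,
                                 full_res_image, patch_size=224):
    """Hypothetical sketch of CAM-based patch extraction.

    feature_map:    (C, h, w) activations from the last conv layer of the
                    first-stage network, run on a downsampled input.
    fc_weight:      (num_classes, C) weight of the final linear classifier.
    class_idx:      class predicted by the first stage.
    full_res_image: (3, H, W) original high-resolution image
                    (assumed H, W >= patch_size).
    """
    C, h, w = feature_map.shape

    # CAM: class-specific weighted sum of the channel activations.
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], feature_map)
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

    # Take the CAM peak as the center of the disease region.
    peak = torch.argmax(cam)
    py, px = divmod(peak.item(), w)

    # Map CAM-grid coordinates back to full-resolution image coordinates.
    H, W = full_res_image.shape[1:]
    cy = int((py + 0.5) * H / h)
    cx = int((px + 0.5) * W / w)

    # Clamp the crop window so it stays inside the image bounds.
    half = patch_size // 2
    top = max(0, min(cy - half, H - patch_size))
    left = max(0, min(cx - half, W - patch_size))
    return full_res_image[:, top:top + patch_size, left:left + patch_size]
```

In the method's terms, the returned patch is what the second-stage network (PatchNet) would then classify; because the patch covers only the class-discriminative region, the second stage operates on input with far less spatial redundancy than the full image.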
