Abstract

Pests are a major threat to the security of global agricultural production. Accurate pest identification is therefore vital for farmers to increase yields and the associated income. In recent years, convolutional neural networks (CNNs) have become a mainstream method for pest identification. However, existing CNN-based approaches lack diverse, discriminative feature representations, which makes it difficult to improve their recognition performance in large-scale pest identification. To address this limitation, we propose the hierarchical complementary network (HCNet), which captures pest feature representations and fuses them to obtain hierarchical complementary information. Specifically, we first use a “shallow to deep” strategy to capture hierarchical representations of pest images. We then propose a spatial feature discrimination (SFD) module, which captures the key information in these hierarchical representations by boosting the spatial features of the current phase and suppressing those of the next phase. Finally, we design coordinate attention-guided feature complementary (CAFC) modules to fuse complementary information between the features extracted by the SFD modules. We conduct experiments on the large-scale pest dataset IP102. Without bells and whistles, the proposed HCNet (ConvNeXt-B) achieves 75.36% accuracy on the test set, outperforming existing state-of-the-art pest identification methods. Moreover, HCNet outperforms other state-of-the-art methods across different backbone networks. This will have a positive impact on the development of large-scale pest identification methods.
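The abstract describes the CAFC module only at a high level. As an illustration, the sketch below shows one plausible way coordinate attention (Hou et al., 2021) could guide the complementary fusion of two hierarchical feature maps in PyTorch. The class names, channel sizes, and fusion strategy (upsample, re-weight, concatenate, project) are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoordinateAttention(nn.Module):
    """Standard coordinate attention: pool along each spatial axis separately,
    then re-weight the input with the two resulting attention maps."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                        # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)    # (N, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (N, C, 1, W)
        return x * a_h * a_w


class CAFCFusion(nn.Module):
    """Hypothetical coordinate-attention-guided fusion of a shallow-stage and a
    deep-stage feature map (both already projected to the same channel count)."""
    def __init__(self, channels):
        super().__init__()
        self.attn = CoordinateAttention(channels)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, shallow, deep):
        # Upsample the deeper (coarser) map to the shallow map's resolution,
        # re-weight it with coordinate attention, then fuse by concat + 1x1 conv.
        deep = F.interpolate(deep, size=shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        deep = self.attn(deep)
        return self.proj(torch.cat([shallow, deep], dim=1))


if __name__ == "__main__":
    shallow = torch.randn(2, 256, 28, 28)   # features from stage i
    deep = torch.randn(2, 256, 14, 14)      # features from stage i+1
    fused = CAFCFusion(256)(shallow, deep)
    print(fused.shape)                      # torch.Size([2, 256, 28, 28])
```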
