Abstract

In recent years, the precise control of pests has become a concern for multiple departments. However, accurately identifying misclassified eyewitness reports has remained a significant challenge. This study presents a comprehensive prediction and classification model for hornet sighting reports in Washington State. Leveraging deep learning, image analysis, and geographic location processing, the model addresses the challenge of accurately classifying reported sightings as positive, negative, or unverified. The methodology integrates transfer learning with ResNet and employs data augmentation to enhance image-based predictions. PyTorch is used for neural network construction and training, yielding notable improvements in accuracy, especially in recognizing negative cases. Geographic location processing adds a further dimension, using spatial information for distance-based classification: by combining the sigmoid function with geographical distances, predictions are refined, particularly for negative samples, and an auxiliary function improves predictions for reports lacking images. The practical prediction approach integrates image and location data to produce comprehensive results, and extensive data analysis demonstrates the model's efficacy. The significance of this study lies in filling research gaps in related fields and supporting effective pest management, particularly in response to the threat posed by the Asian giant hornet. The resulting model can accurately classify future eyewitness reports to guide relevant departments in their prevention and control strategies.
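The abstract describes the image and location pipelines only at a high level. The sketch below illustrates, under stated assumptions, how ResNet transfer learning with data augmentation in PyTorch might be combined with a sigmoid-of-distance weight; the network depth, augmentation choices, distance scale, and fusion rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming PyTorch/torchvision; class count, augmentations,
# and the distance-decay constant are illustrative assumptions only.
import math

import torch.nn as nn
from torchvision import models, transforms

# Transfer learning: start from a pretrained ResNet and replace the final
# fully connected layer with a three-way head
# (positive / negative / unverified).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Data augmentation applied to sighting photos during training.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def distance_weight(distance_km: float, scale_km: float = 30.0) -> float:
    """Sigmoid of geographic distance to the nearest verified positive
    sighting: reports far from any confirmed nest are pushed toward
    'negative'. The 30 km scale is a hypothetical parameter."""
    return 1.0 / (1.0 + math.exp((distance_km - scale_km) / scale_km))

def combined_score(image_prob_positive: float, distance_km: float) -> float:
    """Fuse the image-based positive probability with the location-based
    weight; multiplicative fusion is assumed here for illustration."""
    return image_prob_positive * distance_weight(distance_km)
```

For reports without an attached image, a location-only score such as `distance_weight(distance_km)` could stand in for the fused score, in the spirit of the auxiliary function mentioned above.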
