Abstract

To achieve real-time classification and detection of multiple chicken parts, this study introduces an optimized Swin-Transformer method. It first leverages the Transformer's self-attention structure to capture more comprehensive high-level visual semantic information from chicken part images. Image enhancement was applied during the preprocessing stage to strengthen the feature information in the images, and transfer learning was used to train and optimize the Swin-Transformer model on the enhanced chicken-parts dataset for classification and detection. The model was then compared with four models commonly used in object detection tasks: YOLOV3-Darknet53, YOLOV3-MobileNetv3, SSD-MobileNetv3, and SSD-VGG16. The results indicated that the Swin-Transformer model outperforms these models, with mAP values higher by 1.62%, 2.13%, 5.26%, and 4.48%, and detection times lower by 16.18 ms, 5.08 ms, 9.38 ms, and 23.48 ms, respectively. The proposed method meets production-line requirements while exhibiting superior performance and greater robustness compared with existing conventional methods.
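To make the transfer-learning step concrete, the sketch below fine-tunes an ImageNet-pretrained Swin Transformer for chicken-part classification. This is a minimal illustration assuming PyTorch with the timm and torchvision libraries; the dataset path, class count, augmentations, and hyperparameters are hypothetical placeholders, not the paper's actual configuration.

```python
# Minimal transfer-learning sketch (assumptions: PyTorch + timm available;
# the dataset layout, class count, and hyperparameters below are
# hypothetical, not taken from the paper).
import torch
import timm
from torch import nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

NUM_CLASSES = 6  # hypothetical number of chicken-part categories

# Load a Swin Transformer pretrained on ImageNet; timm replaces the
# classification head to match NUM_CLASSES.
model = timm.create_model(
    "swin_tiny_patch4_window7_224", pretrained=True, num_classes=NUM_CLASSES
)

# Simple augmentation pipeline standing in for the paper's image
# enhancement step during preprocessing.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/chicken_parts/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One fine-tuning epoch: the pretrained backbone and new head are
# updated end to end on the chicken-parts data.
model.train()
for images, labels in train_dl:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

For detection, as reported in the abstract, the same pretrained backbone would instead feed a detection head; that wiring is framework-specific and omitted here.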
