Abstract

Current image feature extraction methods fail to adapt to the fine texture features of apple images, causing image matching errors and degrading image processing accuracy. Taking apples as the research object, a multi-view orthogonal image acquisition system was constructed, consisting of four industrial cameras placed around the apple at different angles and one camera placed on top. After image acquisition, synthetic image pairs (before and after transformation) were generated as the input dataset by applying random transformations to each acquired image. Deep learning-based keypoint detection surpasses traditional techniques by learning more distinctive and descriptive features, which broadens the application range and improves detection accuracy. A lightweight network called ALIKE-APPLE was therefore proposed for apple surface feature point detection. ALIKE-APPLE takes ALIKE as its baseline model and improves the image feature encoder and feature aggregation modules through two components: an Improved Convolutional Block Attention Module (ICBAM) and a Boosting Resolution Sampling Module (BRSM). The proposed ICBAM replaced max pooling in the original image feature encoder for downsampling, enhancing the feature fusion capability of the model by exploiting spatial contextual information and learning region associations in the image. The proposed BRSM replaced bilinear interpolation in the original feature aggregator for upsampling, overcoming the geometric distortion of the apple side images and effectively preserving texture details and edge information. The model size was reduced by optimizing the number of downsampling operations in the original image encoder. Experimental results showed that the average number of observed keypoints and the average matching accuracy improved by 166.41% and 37.07%, respectively, compared with the baseline model. The feature detection model of ALIKE-APPLE was also found to outperform the best-performing comparison model, SuperPoint. Its feature point distribution showed improvements of 10.29% in average standard deviation (Std), 8.62% in average coefficient of variation (CV), and 156.12% in average feature point density (AFPD), and its mean matching accuracy (MMA) improved by 125.97%. Thus, ALIKE-APPLE provides a more uniform distribution of feature points and higher matching accuracy.
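
The abstract states that training pairs were built by applying random transformations to each acquired image. As a minimal illustrative sketch only, the following Python/OpenCV snippet shows one common way to produce such before/after pairs using random homography warps; the transformation family, parameter values, and file names are assumptions for illustration and are not taken from the paper.

```python
import cv2
import numpy as np


def random_homography(h, w, max_shift=0.15, rng=None):
    """Sample a random homography by jittering the four image corners.

    max_shift (a fraction of the image size) is an illustrative choice,
    not a value from the paper.
    """
    rng = rng if rng is not None else np.random.default_rng()
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)).astype(np.float32)
    dst = src + jitter * np.float32([w, h])
    return cv2.getPerspectiveTransform(src, dst)


def make_training_pair(image):
    """Return (original, warped, H): a synthetic before/after image pair.

    Keypoints detected in `image` can be mapped into `warped` via H,
    providing ground-truth correspondences for training and evaluation.
    """
    h, w = image.shape[:2]
    H = random_homography(h, w)
    warped = cv2.warpPerspective(image, H, (w, h))
    return image, warped, H


if __name__ == "__main__":
    # Hypothetical file name for one view from the multi-camera rig.
    img = cv2.imread("apple_view_01.png")
    original, warped, H = make_training_pair(img)
    cv2.imwrite("apple_view_01_warped.png", warped)
```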
