Abstract

Tiny-target recognition in automation is an active research area that typically suffers from complex backgrounds, dim targets, and slow detection speed. In this study, a data-driven method is proposed to recognize the posture of a micro-lens during optical device coupling so that the gripper can clamp it accurately. First, we establish a pixel-wise labeled optical micro-lens dataset, named single-frame micro-lens target (SFMT), which provides data support for the proposed convolutional neural network. We then propose an asymmetric convolutional multi-level attention network (ACMANet) that achieves accurate segmentation of micro-lenses through an embedded multi-scale asymmetric convolutional module (MACM) and a multi-level interactive attention module (MIAM). MACM reduces computational complexity and improves robustness to rotated images through multi-scale asymmetric convolutional kernels. MIAM improves segmentation accuracy by connecting the down-sampling and up-sampling stages and fusing pixel-level positional details with key channel features. Extensive experiments on our self-built image acquisition system show that ACMANet reaches a normalized intersection over union of 91.41% and a Dice score of 95.50% at a processing speed of 3.3 s per 100 images, demonstrating the advantages of ACMANet.
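The abstract does not detail MACM's internals, but the computational saving it attributes to asymmetric kernels can be illustrated with the standard factorization of a separable k×k convolution into a k×1 pass followed by a 1×k pass (9 multiplications per output pixel drop to 6 for k = 3). The sketch below is a minimal numpy illustration of that general principle, not the paper's implementation; `conv2d` is a hypothetical helper written for this example.

```python
import numpy as np

def conv2d(x, k):
    # Valid-mode 2-D cross-correlation via sliding windows.
    kh, kw = k.shape
    win = np.lib.stride_tricks.sliding_window_view(x, (kh, kw))
    return np.einsum('ijkl,kl->ij', win, k)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))    # toy single-channel feature map
u = rng.standard_normal((3, 1))    # 3x1 vertical (asymmetric) kernel
v = rng.standard_normal((1, 3))    # 1x3 horizontal (asymmetric) kernel
k = u @ v                          # equivalent rank-1 3x3 kernel

full = conv2d(x, k)                  # one 3x3 conv: 9 mults per output
factored = conv2d(conv2d(x, u), v)   # 3x1 then 1x3: 6 mults per output

assert np.allclose(full, factored)
```

The assertion holds because cross-correlation with a rank-1 kernel factors exactly; in a network, the asymmetric branches additionally respond differently to horizontal and vertical structure, which is one route to the rotation robustness the abstract claims.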
