Abstract

The application of advanced deep learning methods to hand-object pose estimation is essential for grasping objects safely during human-robot collaborative tasks. Estimating the position and orientation of a hand-object from a two-dimensional image remains a difficult problem under conditions such as occlusion, poor lighting, salient-region ambiguity, and image blur. The proposed method uses an enhanced MobileNetV3 with single-shot detection (SSD) to improve the accuracy of hand-object pose and orientation detection without compromising latency, and is evaluated against YOLOv5. To overcome the limitations of high computational cost, latency, and accuracy, Network Architecture Search (NAS) and the NetAdapt algorithm are applied to MobileNetV3: they perform network search for parameter tuning and adaptive learning for multiscale feature extraction and anchor-box offset adjustment, driven by the automatic variance of weights at each layer. A squeeze-and-excitation block reduces the computation and latency of the model, while the hard-swish activation function and feature pyramid networks help prevent overfitting and stabilize training. A comparative analysis of the enhanced MobileNetV3 against its predecessor and YOLOv5 was carried out; the proposed model and YOLOv5 achieve precision of 92.8% and 89.7%, recall of 93.1% and 90.2%, and mAP of 93.3% and 89.2%, respectively. The proposed method enables better robotic grasping by providing hand-object pose and orientation within tolerances of −1.9 to 2.15 mm along the x-axis, −1.55 to 2.21 mm along the y-axis, −0.833 to 1.51 mm along the z-axis, and −0.233° to 0.273° of rotation about the z-axis.
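
Two of the architectural components the abstract names are standard and easy to illustrate. Below is a minimal PyTorch sketch (an assumed framework; the paper does not specify one) of the hard-swish activation, x · ReLU6(x + 3) / 6, and a squeeze-and-excitation block of the kind MobileNetV3 inserts into its bottlenecks. The channel count, reduction ratio, and tensor shapes are illustrative only, not values taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HardSwish(nn.Module):
    """Hard-swish activation from MobileNetV3: x * ReLU6(x + 3) / 6
    (equivalent to torch.nn.Hardswish)."""
    def forward(self, x):
        return x * F.relu6(x + 3.0) / 6.0

class SqueezeExcite(nn.Module):
    """Squeeze-and-excitation block: global-pool "squeeze", a two-layer
    "excitation" bottleneck, then channel-wise rescaling of the input."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze each channel to 1x1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Hardsigmoid(),                # per-channel gate in [0, 1]
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))     # excite: rescale channels

# Usage: gate a feature map as a MobileNetV3-style bottleneck would.
feats = torch.randn(1, 64, 28, 28)           # (batch, channels, H, W), illustrative
out = HardSwish()(SqueezeExcite(64)(feats))
print(out.shape)                             # torch.Size([1, 64, 28, 28])

The squeeze-and-excitation gate adds only two 1x1 convolutions over a pooled 1x1 map, which is why it improves accuracy at negligible cost in computation and latency.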
