Abstract
The combination of teleoperated robots and the Internet of Things (IoT) could be employed in many areas, including remote nursing and semi-mechanical control. However, subjects are known to develop physical and mental fatigue quickly, which can lower teleoperation accuracy. To address this issue, this article presents a closed-loop teleoperation system based on multisensory fusion with visual and haptic feedback within an IoT framework. Electromyography sensors, an inertial measurement unit, and a mechanical hand control system acquire body signals from participants, which are then processed by artificial-intelligence methods. Resistive sensors installed on the robotic hand measure the contact force between the hand and the grasped object. To convey contact force levels to the user, a haptic interface provides three levels of mechanical vibration. To help users identify and track different objects, a region-based convolutional neural network that detects 100 object categories is constructed. In addition, real-time control (delay < 110 ms) is achieved with a three-layer IoT architecture. The feasibility of the proposed technique is validated in a grasping task carried out by 20 volunteers. On average, participants needed only 10.8 s to finish the task, with a high average success rate of 97%. Moreover, the NASA-TLX questionnaire and a maximum voluntary contraction test indicate that users experience only light mental and physical fatigue when the proposed technique is used.