Abstract

Robotic interventions with redundant mobile manipulators pose a challenge for telerobotics in hazardous environments such as underwater, underground, nuclear, particle accelerator, aerial or space settings. Communication issues can lead to critical consequences, such as imprecise manipulation resulting in collisions, breakdowns and mission failures. The research presented in this paper was driven by the needs of a real robotic intervention scenario in the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). The goal of the work was to develop a framework for network optimisation that facilitates Mixed Reality techniques such as 3D collision detection and avoidance, trajectory planning, real-time control, and automated target approach. The teleoperator was provided with immersive interaction while precise positioning of the robot was preserved. These techniques had to be adapted to delays, bandwidth limitations and their volatility in the shared 4G network of the real underground particle accelerator environment. A novel application-layer congestion control scheme with automatic settings was applied to video and point cloud feedback. Twelve automatic setting modes were proposed, with algorithms based on the camera frame rate, resolution, point cloud subsampling, network round-trip time and throughput-to-bandwidth ratio. Each mode was thoroughly characterized to present its specific use-case scenarios and the improvements it brings to adaptive camera feedback control in teleoperation. Finally, a framework was presented according to which designers can optimize their Human-Robot Interfaces and sensor feedback depending on the network characteristics and the task.
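To illustrate the idea of application-layer congestion control driven by the round-trip time and the throughput-to-bandwidth ratio, the sketch below shows a minimal adaptive feedback controller. The thresholds, scaling factors and function names are illustrative assumptions and do not correspond to the twelve modes defined in the paper.

```python
# Hypothetical sketch of an application-layer adaptive feedback controller.
# Thresholds and scaling rules are illustrative assumptions, not the paper's
# actual twelve automatic setting modes.
from dataclasses import dataclass


@dataclass
class FeedbackSettings:
    frame_rate_hz: float        # camera frame rate
    resolution_scale: float     # 1.0 = full resolution, 0.5 = half, ...
    cloud_subsample: int        # keep every n-th point of the point cloud


def adapt_settings(rtt_ms: float,
                   throughput_bps: float,
                   bandwidth_bps: float,
                   current: FeedbackSettings) -> FeedbackSettings:
    """Scale camera and point-cloud feedback to the measured network state.

    The throughput-to-bandwidth ratio indicates how close the link is to
    saturation; the round-trip time indicates queuing delay. Both are used
    to back off, or restore, the sensor feedback rate and detail.
    """
    utilisation = throughput_bps / max(bandwidth_bps, 1.0)

    if rtt_ms > 300 or utilisation > 0.9:
        # Congested link: reduce frame rate, resolution and point-cloud density.
        return FeedbackSettings(
            frame_rate_hz=max(current.frame_rate_hz * 0.5, 1.0),
            resolution_scale=max(current.resolution_scale * 0.75, 0.25),
            cloud_subsample=min(current.cloud_subsample * 2, 64),
        )
    if rtt_ms < 100 and utilisation < 0.5:
        # Headroom available: gradually restore feedback quality.
        return FeedbackSettings(
            frame_rate_hz=min(current.frame_rate_hz * 1.25, 30.0),
            resolution_scale=min(current.resolution_scale * 1.1, 1.0),
            cloud_subsample=max(current.cloud_subsample // 2, 1),
        )
    return current  # stable region: keep current settings


if __name__ == "__main__":
    settings = FeedbackSettings(frame_rate_hz=30.0,
                                resolution_scale=1.0,
                                cloud_subsample=1)
    # Example: high RTT on the shared 4G link triggers a back-off.
    settings = adapt_settings(rtt_ms=450, throughput_bps=8e6,
                              bandwidth_bps=9e6, current=settings)
    print(settings)
```

In such a scheme the controller runs periodically on the teleoperation side, so the video and point cloud streams degrade gracefully instead of saturating the shared link and inflating the control delay.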
