Abstract
When disasters such as earthquakes or building collapses occur, rescuers must enter dangerous regions and evacuate the wounded quickly. Long-term rescue operations in hazardous environments tax rescuers' physical strength and threaten their lives. We propose a human-body reconstruction method that supports autonomous rescue operations by a casualty-rescue robot. The method constructs a human body model that can guide the robot's autonomous rescue operations given only limited scans of the injured person; this scanning limitation is typical because most casualties lie supine on the ground. We propose a scanning-viewpoint planning method based on deep reinforcement learning and change the viewpoint with a 6-degree-of-freedom (DOF) robotic arm to obtain a complete scan of the visible surface. For successful autonomous casualty rescue, the robot must have access to a watertight human body mesh with accurate surfaces and associated semantics. To this end, we develop a neural network that predicts a fully clothed body mesh from the visible point clouds. Experimental results show that our method reconstructs a watertight body mesh with accurate surfaces and human semantics, satisfying the needs of autonomous robotic casualty rescue under scanning limitations.