Abstract

Firefighters need to gather information from both inside and outside buildings in first-response emergency scenarios, and drones are well suited to providing it. This paper presents an elicitation study that revealed firefighters' desire to collaborate with autonomous drones. We developed a Human-Drone Interaction (HDI) method for indicating a target to a drone using 3D pointing gestures estimated solely from a monocular camera. The participant points at a window without using any wearable or body-attached device; the drone detects the gesture through its front-facing camera and computes the target window. This work describes the process of choosing the gesture, detecting and localizing objects, and carrying out the transformations between coordinate systems. Our 3D pointing gesture interface improves on 2D interfaces by integrating depth information from SLAM, resolving the ambiguity that arises when multiple objects are aligned on the same plane in a large-scale outdoor environment. Experimental results showed that our 3D pointing gesture interface achieved average F1 scores (the harmonic mean of precision and recall) of 0.85 in simulation and 0.73 in real-world experiments, and an F1 score of 0.58 at the maximum tested distance of 25 m between the drone and the building.
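
As a minimal sketch of the target-selection step described above (not the authors' implementation; the function, the frame assumptions, and the 1 m tolerance are illustrative), once the 3D shoulder and wrist keypoints and the detected window centroids are expressed in a common world frame, the target can be taken as the window centroid closest to the pointing ray:

    import numpy as np

    def select_window(shoulder, wrist, window_centroids, max_offset=1.0):
        """Index of the window centroid nearest the shoulder->wrist ray,
        or None if nothing lies within max_offset metres of the ray."""
        shoulder = np.asarray(shoulder, dtype=float)
        wrist = np.asarray(wrist, dtype=float)
        direction = wrist - shoulder
        direction = direction / np.linalg.norm(direction)  # unit pointing vector
        best_idx, best_dist = None, max_offset
        for i, c in enumerate(np.asarray(window_centroids, dtype=float)):
            t = float(np.dot(c - shoulder, direction))     # distance along the ray
            if t <= 0:                                     # behind the user: skip
                continue
            perp = float(np.linalg.norm(c - (shoulder + t * direction)))
            if perp < best_dist:
                best_idx, best_dist = i, perp
        return best_idx

Because the comparison happens in 3D, two windows that overlap in the 2D image but sit at different depths lie at different distances from the ray, which is the ambiguity the 3D interface resolves.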

Highlights

  • We developed two applications: one using only 2D information, and another using 3D information captured from a point cloud obtained from a monocular simultaneous localization and mapping (SLAM) system

  • We showed that the sparse point cloud from ORB-SLAM could be used to detect the user's position, and that the drone's translational movements ensured the point cloud updated that position after a few moments (a sketch of this 2D-to-3D lifting follows this list)
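
One way a sparse SLAM map can supply the missing depth for a 2D keypoint (a sketch under assumed inputs, not ORB-SLAM's actual API): project the current map points into the image with the pinhole intrinsics and take the median depth of the points landing near the keypoint's pixel. Here map_points_cam is assumed to have already been transformed into the camera frame using the current SLAM pose:

    import numpy as np

    def depth_at_pixel(map_points_cam, K, pixel, radius_px=15.0):
        """Median depth (z, metres) of map points whose projection falls
        within radius_px of the given pixel; None if no point qualifies."""
        pts = np.asarray(map_points_cam, dtype=float)   # shape (N, 3), z forward
        pts = pts[pts[:, 2] > 0]                        # keep points in front of the camera
        if pts.size == 0:
            return None
        proj = (np.asarray(K, dtype=float) @ pts.T).T   # apply intrinsics
        uv = proj[:, :2] / proj[:, 2:3]                 # normalize to pixel coordinates
        near = np.linalg.norm(uv - np.asarray(pixel, dtype=float), axis=1) < radius_px
        return float(np.median(pts[near, 2])) if near.any() else None

With this depth, the keypoint can be back-projected to a 3D position, which is what separates the 3D application from the purely 2D one above; as the highlight notes, the drone's translational motion gives the SLAM system the parallax it needs to triangulate points near the user.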

Introduction

In emergency scenarios such as fires, a group of professionals called first responders arrives on the scene to gather as much information as possible about the current situation. Certain pieces of information, e.g., the apartment type or the presence of gas, are critical to evaluating the situation, and many of them can only be obtained from an aerial view. A first responder sometimes uses manually controlled unmanned aerial vehicles (UAVs) to get a top view of the situation, but it is difficult for a UAV pilot to navigate inside a building without direct visual contact. Autonomous UAVs can potentially perform this type of task. Before a UAV enters a building, a point of entry, such as a window, has to be chosen. Human decisions should guide this choice, because the information gathered will first come from that entrance area, and it is up to the first-responder team to assess the situation and decide which area to investigate first.
