Equipping first responders (FRs) with new capabilities and useful information can improve mission success rates and, therefore, save more lives. This paper describes the design and implementation of a modular interface for augmented reality (AR) displays integrated into standard FR equipment, providing support during the adverse-visibility conditions that rescuers encounter on their missions. The interface includes assistance from a machine learning module, the Robust Vision Module, which detects relevant objects in a rescue scenario, particularly victims, using the feed from a thermal camera. This feed can be displayed directly together with the detected objects, helping FRs avoid overlooking anything during their operations. Additionally, the presentation of information in the interface is organized according to the FRs' biometric parameters during operations. The main novelty of the project is its orientation towards practical solutions for FRs, focusing on something sometimes overlooked in research projects: the point of view of the end user. The functionalities were designed through multiple iterations between researchers and FRs, involving testing and evaluation in realistic situations within training scenarios. Thanks to this feedback, the overall satisfaction reported in the evaluations of 18 FRs is 3.84 out of 5 for the Robust Vision Module and 3.99 out of 5 for the complete AR interface. These functionalities, along with the different display modes available to FRs to adapt to each situation, are detailed in this paper.