Abstract

In this work we contribute to the development of a human-like, visual-attention-based artificial vision system intended to boost firefighters' awareness of the hostile environment in which they must operate. By exploiting artificial visual attention, the system's behavior can be adapted to a firefighter's way of gazing, acquiring a kind of human-like visual acuity that supports firefighters in assessing intervention conditions or the rescue prospects of people in distress within the disaster area. We achieve this goal by combining a statistically founded, bio-inspired saliency detection model with a machine-learning-based human-eye-fixation model. Hybridizing these two models yields a system able to tune its parameters so as to match human-like gazing over the inspected environment, which opens appealing perspectives for computer-aided firefighter assistance. Using both several available wildland-fire image databases and an implementation of the investigated concept on a six-wheeled mobile robot equipped with communication facilities, we provide experimental results showing both the plausibility and the efficiency of the proposed system.
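The hybridization described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual models: a center-surround contrast map stands in for the statistically founded saliency model, a central Gaussian bias stands in for the learned eye-fixation model, and a single blend weight is grid-searched against a human fixation map, illustrating the idea of tuning parameters to fit human gazing.

```python
import numpy as np

def box_blur(img, k=7):
    """Mean filter via padded shifted sums (surround estimate)."""
    r = k // 2
    padded = np.pad(img, r, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += padded[r + dy: r + dy + img.shape[0],
                          r + dx: r + dx + img.shape[1]]
    return acc / (k * k)

def bio_saliency(img):
    """Center-surround contrast: a crude stand-in for the
    statistically founded bio-inspired saliency model."""
    s = np.abs(img - box_blur(img))
    return s / (s.max() + 1e-9)

def fixation_prior(shape):
    """Central Gaussian bias, standing in for the ML-based
    human-eye-fixation model (hypothetical)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    g = np.exp(-(((y - h / 2) ** 2) / (0.1 * h * h)
                 + ((x - w / 2) ** 2) / (0.1 * w * w)))
    return g / g.max()

def fit_alpha(img, human_map, steps=21):
    """Grid-search the blend weight so the hybrid map best
    matches a recorded human fixation map (MSE criterion)."""
    sal, fix = bio_saliency(img), fixation_prior(img.shape)
    err, alpha = min(
        (np.mean((a * sal + (1 - a) * fix - human_map) ** 2), a)
        for a in np.linspace(0.0, 1.0, steps)
    )
    return alpha, alpha * sal + (1 - alpha) * fix
```

In this sketch the single scalar `alpha` plays the role of the tunable parameters: fitting it against human fixation data pulls the combined map toward human-like gazing of the scene.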
