Abstract

The use of a video camera to support search immediately raises the question: for target location, what is the most effective way of presenting video camera output to a human observer ('spotter')? We examine three presentation modes: (a) unprocessed video output; (b) 'static visual presentation' (SVP), in which a series of static views of the search area can be examined in turn while keeping pace with drone movement; and (c) a novel mode called 'Live SVP' (LSVP), in which the locations sequentially captured by a camera are presented discretely in real time, thereby preserving any movement such as a person waving to attract attention. The dynamics of aerial video were modelled using game development software, and the resulting videos were used to support realistic search exercises with human participants. Each participant attempted to identify lost school children in the simulated environment using one of the presentation modes described above. The new LSVP viewing mode was found to be superior among those tested for moving targets in a low-distraction environment. Another principal finding was that the density of distractors (i.e., non-target objects) had a significant influence on the success of target identification.

KEY WORDS: Search, Rescue, Drone, Video, Target, Presentation
