Abstract
The use of a video camera to support search immediately raises the question: for target location, what is the most effective way of presenting video camera output to a human observer ('spotter')? We examine three presentation modes: (a) unprocessed video output; (b) 'static visual presentation' (SVP), in which a series of static views of the search area can be examined in turn while keeping pace with drone movement; and (c) a novel mode called 'Live SVP' (LSVP), in which the locations sequentially captured by the camera are presented discretely in real time, thereby preserving any movement, such as a person waving to attract attention. The dynamics of aerial video were modelled using game development software, and the resulting videos were used to support realistic search exercises with human participants. Each participant's task was to identify lost school children in the simulated environment using one of the presentation modes described above. We found that the new LSVP viewing mode is superior to the other modes tested for moving targets in a low-distraction environment. A second principal finding was that the density of distractors (i.e., non-target objects) significantly influenced the success of target identification. KEY WORDS: Search, Rescue, Drone, Video, Target, Presentation