Abstract
Guided visual search is a common theme in human factors (HF) applications. In this project we use large-scale databases of wildlife camera-trap imagery as a testbed for optimizing target highlighting. MegaDetector, a generic deep learning model for animal detection, provides bounding boxes for potential targets within an image. In some cases, human observers are needed to confirm or further classify the detections. Outlining the bounding box can direct human attention to the target area of interest (AOI) and improve observers’ classification speed and accuracy. However, the outline introduces visual clutter and crowding at the AOI boundary. In a first empirical study we investigated the use of padding to mitigate the effects of local clutter and compared different methods of visual highlighting (colored outline vs. blur outside the AOI). We found support for using padding to improve performance when animals were hard to see. Both colored outlines and blur were effective at directing observers’ attention.
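The paper does not include code, but the two highlighting manipulations described above (a padded colored outline vs. blurring everything outside the padded AOI) can be illustrated with a minimal sketch. The function name, box format, padding amount, and blur radius below are illustrative assumptions, assuming a MegaDetector-style bounding box already converted to pixel coordinates.

```python
# Hypothetical sketch of the two highlighting manipulations: a padded colored
# outline, or a blur applied outside the padded AOI. Values are illustrative.
from PIL import Image, ImageDraw, ImageFilter

def highlight_detection(image_path, box, pad=20, mode="outline"):
    """Direct attention to a detection.

    box:  (left, top, right, bottom) in pixels (assumed format).
    pad:  padding in pixels added around the box to reduce crowding
          at the AOI boundary.
    mode: "outline" draws a colored rectangle around the padded AOI;
          "blur" blurs the image everywhere outside the padded AOI.
    """
    img = Image.open(image_path).convert("RGB")
    left, top, right, bottom = box
    # Expand the AOI by the padding amount, clamped to the image bounds.
    aoi = (max(left - pad, 0), max(top - pad, 0),
           min(right + pad, img.width), min(bottom + pad, img.height))

    if mode == "outline":
        # Colored outline around the padded AOI.
        draw = ImageDraw.Draw(img)
        draw.rectangle(aoi, outline=(255, 0, 0), width=4)
    elif mode == "blur":
        # Blur the whole image, then paste the sharp AOI region back on top.
        blurred = img.filter(ImageFilter.GaussianBlur(radius=8))
        blurred.paste(img.crop(aoi), aoi[:2])
        img = blurred
    return img
```

In this sketch the padding is applied before drawing or blurring, so the highlight boundary sits away from the target itself, which is the manipulation the study uses to mitigate crowding at the AOI edge.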