Abstract

Visual search becomes challenging when the time to find the target is limited. Here we focus on how performance in visual search can be improved via a subtle, saliency-aware modulation of the scene. Specifically, we investigate whether blurring salient regions of the scene can help participants find the target faster when the target is located in non-salient areas. A set of real-world omnidirectional images was displayed in virtual reality with a search target overlaid on the visual scene at a pseudorandom location. Participants performed a visual search task under three conditions defined by blur strength, with the task of finding the target as quickly as possible. The mean search time and the proportion of trials in which participants failed to find the target were compared across conditions. In addition, the number and duration of fixations were evaluated. Linear mixed models revealed a significant effect of blur on both the behavioral and the fixation metrics. This study shows that performance in a challenging, realistic visual search scenario can be improved by a subtle, saliency-aware scene modulation. The current work provides insight into potential visual augmentation designs aimed at improving users' performance in everyday visual search tasks.
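As a rough illustration of the kind of modulation studied here, the sketch below blurs the salient regions of an image given a saliency map, blending a Gaussian-blurred copy into the original through a feathered mask. The saliency detector, threshold, blur strength, and blending scheme are illustrative assumptions and not the pipeline used in the experiment, which operated on omnidirectional images rendered in virtual reality.

# Minimal sketch, assuming a per-pixel saliency map in [0, 1].
# Requires opencv-contrib-python for the cv2.saliency module; the
# spectral-residual detector is only a stand-in saliency model.
import cv2
import numpy as np

def blur_salient_regions(image_bgr, saliency, threshold=0.5, kernel_size=21):
    """Blend a blurred copy of the image into regions whose saliency
    exceeds `threshold`, using a feathered mask so the modulation
    stays subtle rather than producing hard edges."""
    blurred = cv2.GaussianBlur(image_bgr, (kernel_size, kernel_size), 0)

    # Soft mask of salient regions, feathered by a Gaussian blur.
    mask = (saliency > threshold).astype(np.float32)
    mask = cv2.GaussianBlur(mask, (kernel_size, kernel_size), 0)
    mask = mask[..., None]  # broadcast over the three color channels

    out = (1.0 - mask) * image_bgr.astype(np.float32) + mask * blurred.astype(np.float32)
    return out.astype(np.uint8)

if __name__ == "__main__":
    img = cv2.imread("scene.jpg")  # hypothetical input file
    sal_model = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal = sal_model.computeSaliency(img)
    sal = cv2.resize(sal.astype(np.float32), (img.shape[1], img.shape[0]))
    cv2.imwrite("scene_modulated.jpg", blur_salient_regions(img, sal))

Feathering the mask keeps the transition between blurred and unmodified regions gradual, which is what would make such a manipulation subtle to the observer while still reducing the competitiveness of salient, target-irrelevant regions.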


Introduction

Visual search is one of the most common tasks in everyday life, be it when a person is looking for a friend in a crowd or when a doctor is analyzing an optical coherence tomography (OCT) scan from a patient [1]. Search becomes more challenging when the time to find the target is limited. In this study we focus on how performance in visual search under limited time can be improved. The difficulty of a visual search task depends on various factors, including how similar the target and the background are, how distinct the target is from the distractors, how complex the scene is, whether the observer has seen the scene before, and many other aspects [3,4,5]. The human capacity to process visual content is limited, and especially in complex searches it is crucial to select and prioritize visual information in order to complete the task.

