Abstract
To address the low efficiency of information processing in robot vision systems, this work studies how to extract fixation points during task execution so that salient, task-relevant target areas in the scene can be located. First, the architecture of a gaze extraction model for bionic vision is proposed, consisting of a spatial visual saliency model and a task-driven fixation point extraction model. The task-driven fixation point extraction process is formulated as a closed-loop control problem. To couple exploration with the sampling process, a Q-learning algorithm with a stochastic policy is adopted, performing top-down, task-driven gaze extraction in the temporal dimension. It is fused with the spatially based visual saliency model to form a spatiotemporal hybrid gaze extraction model that determines the fixation point in the final image. Finally, a qualitative visualization experiment indicates that the model's fixation points over two consecutive frames are close to the ground truth obtained with an eye tracker. Quantitative area-under-the-curve (AUC) and average angular error results confirm the model's effectiveness in predicting and extracting fixation points.
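To make the described closed-loop control stage concrete, below is a minimal sketch of one-step Q-learning with an epsilon-greedy (stochastic) fixation policy, in the spirit of the abstract's task-driven extraction step. The state and action encodings, the reward, and all hyperparameters here are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal sketch of epsilon-greedy Q-learning for task-driven fixation
# selection. Everything below (state/action discretization, reward signal,
# hyperparameters) is an assumed toy setup, not the paper's model.
import random

import numpy as np

N_STATES = 16   # assumed: discretized gaze/scene states
N_ACTIONS = 9   # assumed: candidate fixation regions per frame

ALPHA = 0.1     # learning rate
GAMMA = 0.9     # discount factor
EPSILON = 0.2   # probability of a random (exploratory) fixation

Q = np.zeros((N_STATES, N_ACTIONS))

def select_fixation(state: int) -> int:
    """Epsilon-greedy policy: usually exploit the best-valued region,
    but fixate a random region with probability EPSILON so that
    exploration stays coupled with the sampling process."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(Q[state]))

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard one-step Q-learning backup."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# Toy closed-loop episode: reward is 1 when the chosen region matches a
# (hypothetical) task-relevant target region, 0 otherwise.
target_region = 4
state = 0
for step in range(1000):
    action = select_fixation(state)
    reward = 1.0 if action == target_region else 0.0
    next_state = (state + 1) % N_STATES
    update(state, action, reward, next_state)
    state = next_state
```

The EPSILON term is what makes the policy stochastic: with probability EPSILON the agent samples a random fixation rather than the current best estimate, which is one plausible reading of the "random strategy" the abstract mentions.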