Abstract

The concept of touch saliency was recently introduced to generate image saliency maps from users' simple zoom behavior on touch devices. However, when browsing images on a touch screen, users exhibit a variety of touch behaviors, such as pinch zoom, tap, double-tap zoom, and scroll. Do these different behaviors correspond to different patterns of human attention? Which behaviors correlate strongly with human eye fixations? How can a good image saliency map be learned from multiple human behaviors? In this work, we design and conduct a series of studies to address these open questions. We also propose a novel touch saliency learning approach that derives an image saliency map from a variety of human touch behaviors using machine learning. The experimental results demonstrate the validity of our studies and the potential and effectiveness of the proposed approach.
