Abstract

Extensive research has been conducted in recent years to develop Visual Attention (VA) models for 2D and stereoscopic 3D images and videos, and more recently for Virtual Reality and 360° content. Reliable VA models help in designing efficient approaches for several applications, such as coding, streaming, foveated rendering, cinematography, movie editing, and Quality of Experience (QoE) evaluation. In this talk, I will review the current status of VA research: advances and challenges from user studies to modeling and benchmarking. A special focus will be dedicated to omnidirectional content. I will also illustrate how studying the visual attention deployment of visually impaired people can help improve VA computational modeling.
