Abstract

State-of-the-art methods for video resizing usually produce perceptible visual discontinuities, so preserving visual continuity across video frames is one of the most critical issues. In this paper, we propose a novel approach for modeling dynamic visual attention based on spatiotemporal analysis, which automatically detects the focus of interest. Co-sited blocks that vary continuously within a video cube are first detected, and their variations are characterized as visual cubes, which are then used to determine a proper extent of the salient regions in video frames. Once this extent is determined across the video cubes, the resizing process can be conducted to find the global optimum. Our experiments show that the proposed content-aware video resizing based on spatiotemporal visual cubes effectively generates resized videos while preserving isotropic manipulation and the continuous dynamics of visual perception.
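To make the idea of measuring variation across co-sited blocks of a video cube more concrete, the following minimal sketch computes a per-block temporal-variation map from a stack of grayscale frames. It is an illustrative assumption, not the paper's actual visual-cube construction; the function name `block_variation_map` and the choice of mean absolute frame difference as the activity measure are hypothetical.

```python
import numpy as np

def block_variation_map(frames: np.ndarray, block: int = 16) -> np.ndarray:
    """Temporal variation of co-sited blocks in a video cube.

    frames: array of shape (T, H, W), grayscale intensities.
    Returns an (H // block, W // block) map; larger values indicate
    stronger spatiotemporal activity, a rough proxy for saliency.
    """
    T, H, W = frames.shape
    by, bx = H // block, W // block
    # Crop to a multiple of the block size and split into co-sited blocks:
    # shape becomes (T, by, block, bx, block).
    cubes = frames[:, :by * block, :bx * block].reshape(T, by, block, bx, block)
    # Frame-to-frame differences within each stack of co-sited blocks.
    diffs = np.abs(np.diff(cubes.astype(np.float32), axis=0))
    # Average the change over time and over the pixels of each block.
    return diffs.mean(axis=(0, 2, 4))
```

A downstream step could, for example, threshold this map to bound the extent of salient regions and then scale the remaining regions more aggressively; how the paper actually aggregates the visual cubes and optimizes the resizing globally is described in the full text.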
