Abstract

Digital imagery and video often contain content at higher resolutions than most data projectors can display, which has motivated a variety of techniques for improving the perception of high-resolution content on lower-resolution displays. However, the downsampling procedures commonly used to fit a high-resolution image or video to a lower-resolution projector cause a frustrating loss of fine structures in the projected imagery. Since the human visual system is more sensitive to certain types of image content, such as text and edges, an optimal approach to preserving fine structures should further sharpen such content when displayed. On the other hand, the human visual system is also very sensitive to aliasing effects in motion, so over-sharpening can lead to significant motion artifacts. In this paper, a new non-stationary content-adaptive resolution enhancement scheme is proposed. Our main objective in this study is to reduce the severity of artifacts introduced by the image enhancement process. To achieve this goal, a distribution-based text detection method and a hypothesis-testing-based motion detection method are developed. Three spatial kernels, each constructed using a new band-limited Wiener deconvolution filter, enhance a given image with different sharpening strengths, and the differently enhanced images are combined using a weighted non-stationary filter. For evaluation, a new visual projection assessment (VPA) dataset along with new metrics for quantifying motion artifacts are introduced. Experimental results show that the proposed non-stationary content-adaptive resolution enhancement scheme offers improved visual quality over the state of the art while achieving a reasonable balance between high text sharpness and reduced motion artifacts.
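
The core of the proposed scheme is the pixel-wise blending of three differently sharpened versions of each frame, with blending weights driven by the text- and motion-detection stages. As a rough illustration only (not the paper's exact band-limited formulation), the Python sketch below builds a generic frequency-domain Wiener deconvolution filter and blends three sharpening strengths with per-pixel weight maps; the Gaussian blur model, the noise-to-signal ratios, and the uniform weight maps are all illustrative assumptions.

import numpy as np

def gaussian_psf(size, sigma):
    """Isotropic Gaussian point-spread function, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def pad_to(kernel, shape):
    """Zero-pad a small kernel to the image size with its center at shape//2,
    so that np.fft.ifftshift moves the kernel center to the origin."""
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    top, left = shape[0] // 2 - kh // 2, shape[1] // 2 - kw // 2
    padded[top:top + kh, left:left + kw] = kernel
    return padded

def wiener_sharpen(image, psf, nsr):
    """Generic frequency-domain Wiener deconvolution: W = H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(pad_to(psf, image.shape)))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

# Placeholder grayscale frame and an assumed projector blur model.
image = np.random.rand(256, 256)
psf = gaussian_psf(9, sigma=1.5)

# Three sharpening strengths: mild (for moving regions), medium, and strong
# (for static text); a larger NSR yields a gentler, more regularized filter.
levels = [wiener_sharpen(image, psf, nsr) for nsr in (1.0, 0.1, 0.01)]

# Per-pixel weight maps (uniform placeholders here); in the proposed scheme
# they would come from the text/motion detectors and sum to 1 at each pixel.
weights = np.full((3,) + image.shape, 1.0 / 3.0)
enhanced = sum(w * lvl for w, lvl in zip(weights, levels))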
