Abstract

With the increasing availability of high-resolution images, videos, and 3D models, the demand for scalable large-data processing techniques grows accordingly. We introduce a method of sparse dictionary learning for edit propagation on large input data. Previous approaches to edit propagation typically employ a global optimization over the whole set of pixels (or vertices), incurring prohibitively high memory and time consumption for large inputs. Rather than propagating an edit pixel by pixel, we follow the principle of sparse representation to obtain a representative and compact dictionary and perform edit propagation on the dictionary instead. The sparse dictionary provides an intrinsic basis for the input data, and the coding coefficients capture the linear relationship between all pixels and the dictionary atoms. The learned dictionary is then optimized by a novel scheme that maximizes the Kullback-Leibler divergence between atom pairs to remove redundant atoms. To enable local edit propagation for images or videos with similar appearance, we propose a dictionary learning strategy with a range constraint that better accounts for the global distribution of pixels in their feature space. We show several applications of sparsity-based edit propagation, including video recoloring, theme editing, and seamless cloning, operating on both color and texture features. Our approach can also be applied to computer graphics tasks such as 3D surface deformation. We demonstrate that, with an atom-to-pixel ratio on the order of 0.01%, which amounts to a substantial reduction in memory consumption, our method still maintains a high degree of visual fidelity.
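To make the pipeline concrete, below is a minimal sketch of dictionary-based edit propagation. It uses scikit-learn's MiniBatchDictionaryLearning as a stand-in for the paper's sparse dictionary learner, and replaces the paper's atom-level optimization with a simple least-squares fit; the function name `propagate_edit` and the inputs `user_edits` and `edit_mask` are hypothetical, chosen only for illustration.

```python
# A minimal sketch, assuming pixel features (e.g., RGB) as rows of a
# matrix; this is NOT the paper's exact formulation, only the general
# idea: edit the atoms, then broadcast via the sparse codes.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def propagate_edit(pixels, user_edits, edit_mask, n_atoms=64):
    """pixels: (N, d) feature vectors; user_edits: (N, k) desired edit
    values; edit_mask: (N,) bool, True where the user drew a stroke."""
    # 1. Learn a compact dictionary and sparse codes for all pixels,
    #    so each pixel is a sparse linear combination of the atoms.
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                          transform_algorithm='lasso_lars')
    codes = learner.fit(pixels).transform(pixels)   # (N, n_atoms)

    # 2. Solve for per-atom edits so that the coded reconstruction
    #    matches the user strokes (least squares on masked rows only;
    #    the paper instead solves its propagation energy on the atoms).
    atom_edits, *_ = np.linalg.lstsq(codes[edit_mask],
                                     user_edits[edit_mask], rcond=None)

    # 3. Propagate: every pixel's edit is its sparse combination of
    #    the atom-level edits, so cost scales with the atom count.
    return codes @ atom_edits                       # (N, k)
```

Because the expensive step operates on n_atoms rather than N pixels, an atom-to-pixel ratio around 0.01% is what yields the memory savings the abstract reports.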
