Abstract

This study proposes a unified method for a wide variety of image composition tasks. The method is developed by constraining the responses of a set of filters applied to a target image, where each filter describes one attribute of the target. Each attribute field is constrained to equal the corresponding attribute field of an input source. The constraints imposed by all attributes are weighted heterogeneously and formulated as a minimisation problem. For different tasks, the required attributes (e.g. gradient, texture and colour constraints) can be specified by different sources (e.g. taken from a given image, constructed from several images or specified by users). The framework is flexible and can be configured for a variety of image editing tasks. To validate its effectiveness, a variety of applications are presented, including illumination removal from face data, remote-sensing image fusion, texture transfer, multi-focus image fusion, seamless texture tiling and text layer transfer. The experimental results show that the proposed method performs effectively on these tasks in comparison with classical methods designed for each task.
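The weighted attribute-constraint formulation described above can be illustrated with a small sketch. This is not the paper's implementation; it is a hypothetical 1-D toy with two assumed "attribute filters", a forward-difference (gradient) filter and an identity (colour) filter, each constrained to a source field, combined with heterogeneous weights and solved as a least-squares minimisation:

```python
import numpy as np

# Hedged 1-D sketch of a weighted attribute-constraint minimisation.
# All names and weights here are illustrative assumptions, not the paper's.
n = 64

# Attribute fields supplied by (hypothetical) sources:
grad_source = np.full(n - 1, 1.0 / n)  # desired gradient field (a gentle ramp)
colour_source = np.zeros(n)            # desired colour field (pull toward zero)

# Gradient filter: forward-difference operator D, so (D u)[i] = u[i+1] - u[i].
D = np.zeros((n - 1, n))
D[np.arange(n - 1), np.arange(n - 1)] = -1.0
D[np.arange(n - 1), np.arange(1, n)] = 1.0

# Heterogeneous weights on the two attribute constraints.
w_grad, w_colour = 1.0, 0.01

# Minimise  w_grad * ||D u - g||^2 + w_colour * ||u - c||^2
# via the normal equations (the colour term keeps the system invertible).
A = w_grad * D.T @ D + w_colour * np.eye(n)
b = w_grad * D.T @ grad_source + w_colour * colour_source
u = np.linalg.solve(A, b)
```

With the gradient constraint weighted heavily, the recovered signal `u` follows the source gradient field (a rising ramp), while the lightly weighted colour constraint anchors its overall level, mirroring how the framework trades off attribute sources per task.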
