Abstract
This paper proposes an image quality assessment (IQA) method for image inpainting, aiming to select the best result from among multiple candidates. It is known that inpainting results vary greatly with the method used and the parameters set. Thus, in a typical use case, users must manually select the inpainting method and parameters that yield the best result. This manual selection takes a great deal of time, so there is a strong need for a way to estimate the best result automatically. Unlike existing IQA methods for inpainting, our method solves this problem as a learning-based ordering task between inpainted images. This approach makes it possible to introduce auto-generated training sets for more effective learning, which has been difficult for existing methods because judging inpainting quality is quite subjective. Our method focuses on the following three points: (1) the problem can be divided into a set of “pairwise preference order estimation” elemental problems, (2) this pairwise ordering approach enables a training set to be generated automatically, and (3) effective feature design is enabled by investigating actually measured human gazes for order estimation.
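To make the “pairwise preference order estimation” idea concrete, the following is a minimal sketch of pairwise order learning in the style of RankSVM/perceptron approaches. The feature extractor here uses simple image statistics as a stand-in; the paper's actual features are gaze-based, so everything below (function names, features, the perceptron update) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def extract_features(img: np.ndarray) -> np.ndarray:
    """Placeholder feature vector for an inpainted image (H x W, grayscale).

    The real method uses gaze-informed features; these statistics are
    only a stand-in for demonstration.
    """
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.std(), np.abs(gx).mean(), np.abs(gy).mean()])

def train_pairwise(pairs, lr=0.1, epochs=100):
    """Learn weights w so that w . (f_better - f_worse) > 0 on training pairs.

    pairs: list of (better_img, worse_img) tuples with known preference order.
    Uses a simple perceptron-style update on misordered pairs.
    """
    diffs = [extract_features(a) - extract_features(b) for a, b in pairs]
    w = np.zeros(len(diffs[0]))
    for _ in range(epochs):
        for d in diffs:
            if w @ d <= 0:      # pair currently misordered -> update
                w += lr * d
    return w

def prefer(w, img_a, img_b):
    """Return True if img_a is estimated to be the better inpainting."""
    return w @ (extract_features(img_a) - extract_features(img_b)) > 0
```

Ranking multiple inpainting candidates then reduces to counting pairwise wins for each candidate, which is how a pairwise order estimator can select "the best one" from many results.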
Highlights
– This is the first trial applying learning-to-rank to IQA of inpainted images.
– The proposed method enables automatically generated training data to be introduced by making good use of a ranking mechanism, even though the learning target is quite subjective.
– It proposes new image features dedicated to inpainted-image quality assessment on the basis of gaze-measurement experiments.
This section shows how we analyzed what we should focus on to assess the quality of inpainted images, on the basis of knowledge of human attention and corresponding subjective evaluations. We believe that this knowledge will be useful in developing an IQA method for image inpainting.
Summary
Photos sometimes include unwanted regions, such as a person walking in front of the filming target or a trash can on a beautiful beach. Unlike existing IQA methods for inpainting, our method does not use a computational visual-saliency map; instead, it draws on our investigation of human gaze while viewing inpainted images. Another important proposal is the automatic generation of training data: the ranking mechanism makes it possible to introduce automatically generated training data even though the learning target is quite subjective. This paper is based on our previous conference proceedings [16] and adds a comprehensive investigation of how the proposed features work and a novel method for accumulating training data automatically to improve estimation accuracy.
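One reason pairwise ordering admits automatic training data is that an ordered pair can be manufactured without human judgment: a copy of an image degraded more heavily can safely be labeled the worse one. The sketch below illustrates this idea; the specific degradation (additive Gaussian noise at increasing levels) and all function names are our own illustrative assumptions, not the paper's exact data-generation procedure.

```python
import numpy as np

def degrade(img: np.ndarray, noise_level: float, rng) -> np.ndarray:
    """Return a degraded copy of img by adding Gaussian noise (clipped to [0, 1])."""
    return np.clip(img + rng.normal(0.0, noise_level, img.shape), 0.0, 1.0)

def make_training_pairs(images, levels=(0.05, 0.15, 0.3), seed=0):
    """For each image, emit (better, worse) pairs across degradation levels.

    A less-degraded copy is assumed to be preferred, giving an objective
    ordering with no manual labeling: every pair (versions[i], versions[j])
    with i < j is ordered better-first.
    """
    rng = np.random.default_rng(seed)
    pairs = []
    for img in images:
        versions = [img] + [degrade(img, s, rng) for s in levels]
        for i in range(len(versions)):
            for j in range(i + 1, len(versions)):
                pairs.append((versions[i], versions[j]))
    return pairs
```

Pairs produced this way can feed any pairwise order estimator directly, which is the sense in which the ranking formulation sidesteps the subjectivity of absolute quality labels.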