To adapt images to diverse digital devices, researchers have proposed many image retargeting methods. However, the consistency between the results of objective image retargeting quality assessment (IRQA) metrics and subjective perception remains low. In this paper, we propose a visual attention fusion (VAF) framework that helps IRQA metrics better exploit visually important image features such as saliency, faces, and lines. First, we combine the results of multiple salient object detection algorithms to mitigate the limitations of any single algorithm. Second, our framework detects faces and lines so that deformations of these visually sensitive regions can be measured. Finally, we propose a saliency enhancement model that simulates human visual attention for IRQA. We combine the proposed VAF framework with several state-of-the-art IRQA metrics. Experimental results show that the proposed VAF framework improves the consistency between objective IRQA results and subjective opinion scores.
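As an illustration only, and not the authors' actual fusion rule (which the abstract does not specify), the sketch below shows one common way to combine the outputs of several salient object detection algorithms: normalize each detector's map to a common range and take a weighted average. The function name `fuse_saliency_maps` and the equal default weights are assumptions made for this example.

```python
import numpy as np

def fuse_saliency_maps(maps, weights=None):
    """Fuse saliency maps from several detectors into a single map.

    `maps` is a list of 2-D arrays of identical shape; `weights`
    optionally assigns a reliability to each detector (hypothetical).
    """
    maps = [np.asarray(m, dtype=np.float64) for m in maps]
    if weights is None:
        weights = np.ones(len(maps))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()

    fused = np.zeros_like(maps[0])
    for w, m in zip(weights, maps):
        # Normalize each map to [0, 1] so detectors with different
        # output ranges contribute comparably before weighting.
        rng = m.max() - m.min()
        norm = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        fused += w * norm

    return fused

# Usage example with three synthetic detector outputs and equal weights.
if __name__ == "__main__":
    h, w = 120, 160
    detector_outputs = [np.random.rand(h, w) for _ in range(3)]
    combined = fuse_saliency_maps(detector_outputs)
    print(combined.shape, combined.min(), combined.max())
```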