Abstract

Aesthetic attributes are crucial for aesthetic assessment because they explicitly capture the photo-quality cues a human expert would use to judge a photo's aesthetic quality. However, annotating aesthetic attributes is time-consuming, costly, and error-prone, so the photos available in practice are only partially annotated with attributes. To alleviate this issue, we propose a novel semi-supervised adversarial learning method for photo aesthetic assessment that learns from partially attribute-annotated photos and thus greatly reduces the reliance on manual attribute annotation. Specifically, the proposed method consists of a score-attributes generator R, a photo generator G, and a discriminator D. The score-attributes generator learns the aesthetic score and attributes simultaneously to capture their dependencies and construct better feature representations. The photo generator reconstructs the photo from the aesthetic attributes, the score, and an informative feature representation. The discriminator forces the features-attributes-score tuples produced by the score-attributes generator and the photo generator to converge toward the ground-truth distribution of the labeled training data. The proposed method significantly outperforms the state of the art, raising the Spearman rank-order correlation coefficient (SRCC) from the best previously reported 0.726 to 0.761 on the Aesthetics and Attributes Database (AADB) and from 0.756 to 0.774 on the Aesthetic Visual Analysis (AVA) database.
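
The abstract describes three interacting networks: R maps a photo to a feature, attribute predictions, and an aesthetic score; G reconstructs the photo from that tuple; and D judges whether a (features, attributes, score) tuple matches the labeled ground-truth distribution. The sketch below is only an illustration of this wiring, assuming PyTorch and hypothetical feature and attribute dimensions (FEAT_DIM, N_ATTRS, and the module shapes are not specified in the abstract); it is not the authors' implementation.

```python
# Minimal sketch of the R / G / D components described in the abstract.
# All sizes and layer choices are assumptions for illustration only.
import torch
import torch.nn as nn

FEAT_DIM, N_ATTRS = 512, 11  # hypothetical feature size and attribute count


class ScoreAttributeGenerator(nn.Module):
    """R: predicts aesthetic attributes and score from a photo feature."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(FEAT_DIM), nn.ReLU())
        self.attr_head = nn.Linear(FEAT_DIM, N_ATTRS)  # attribute predictions
        self.score_head = nn.Linear(FEAT_DIM, 1)       # aesthetic score

    def forward(self, photo):
        feat = self.backbone(photo)
        return feat, self.attr_head(feat), self.score_head(feat)


class PhotoGenerator(nn.Module):
    """G: reconstructs a photo from (features, attributes, score)."""
    def __init__(self, out_pixels=3 * 64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + N_ATTRS + 1, 1024), nn.ReLU(),
            nn.Linear(1024, out_pixels), nn.Tanh())

    def forward(self, feat, attrs, score):
        return self.net(torch.cat([feat, attrs, score], dim=1))


class Discriminator(nn.Module):
    """D: scores whether a (features, attributes, score) tuple looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + N_ATTRS + 1, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, feat, attrs, score):
        return self.net(torch.cat([feat, attrs, score], dim=1))
```

In this reading, D is trained adversarially against R and G so that tuples produced from unlabeled or partially labeled photos are pushed toward the distribution of fully labeled tuples; the reported SRCC figures would then be the Spearman correlation between predicted and ground-truth scores on the test split (e.g., via scipy.stats.spearmanr).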
