Abstract

We report posing biases in portraits of people who do not exist. Studies of painted or photographed portraiture have often reported such biases, but whether they truly exist or are mere sampling artifacts remains an open question. A novel approach to this question is offered by contemporary applications that generate photo-realistic virtual portraits. Such applications are trained on large datasets of portraits of real people. A neural network maps this input to a high-dimensional generative model that captures the variation in the original data and is then used to synthesize the virtual portraits. We reasoned that, if posing biases exist in the original input, they should also be observable in the network output, and indeed they were. This finding provides novel support for the reality of posing biases in portraiture and helps us better understand what generative networks actually do.
