Abstract

In this paper we use machine learning to transform images taken with an iPhone camera so that they appear to have been taken with a Leica Sofort instant camera, a style commonly known as the Polaroid look. Such image filters already exist and are highly effective, but they rely on ad-hoc techniques. Our goal is to achieve similar results by having a model learn the Polaroid look on its own, and to determine how many image pairs are required to train it. We found that with linear regression the model needed, on average, 800 image pairs before it produced consistently good results, while Pix2Pix (Isola et al. 2017), a conditional adversarial network, and CycleGAN, which builds on generative adversarial networks (Goodfellow et al. 2014), required only 500.
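The abstract does not specify the exact form of the linear-regression model, but one minimal interpretation is a global affine color transform fitted by least squares over paired pixels (iPhone image vs. instant-camera image of the same scene). The sketch below illustrates that idea; the function names and the assumption of a single per-pixel mapping shared across the image are ours, not the paper's.

```python
import numpy as np

def fit_color_transform(src, dst):
    """Fit dst_rgb ~ A @ src_rgb + b by least squares.

    src, dst: float arrays of shape (H, W, 3) holding paired pixels
    (same scene, source look vs. target "Polaroid" look).
    Returns a (3, 4) matrix [A | b].
    """
    X = src.reshape(-1, 3)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    Y = dst.reshape(-1, 3)
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)     # shape (4, 3)
    return M.T                                     # shape (3, 4)

def apply_color_transform(M, img):
    """Apply a learned affine color transform to a new image."""
    X = img.reshape(-1, 3)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    out = X @ M.T
    return np.clip(out, 0.0, 1.0).reshape(img.shape)

# Smoke test with a synthetic "filter": channel scaling plus a lift.
rng = np.random.default_rng(0)
src = rng.random((8, 8, 3)) * 0.8          # keep values clear of clipping
true_A = np.diag([1.1, 0.95, 0.85])        # warm-tint style channel gains
dst = src @ true_A.T + 0.05
M = fit_color_transform(src, dst)
recon = apply_color_transform(M, src)
print(np.abs(recon - dst).max())           # near-zero residual
```

In practice a global affine map captures only color grading; the vignetting and contrast roll-off of an instant camera would need a richer model, which is where the conditional and generative adversarial networks compared in the paper come in.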
