Abstract

To build computer vision systems with good generalization ability, one usually needs large-scale, diverse labeled image data for training and evaluating the models at hand. Because it is difficult to collect satisfactory image data from real scenes, in this paper we propose a unified theoretical framework for image generation, called parallel imaging. The core component of parallel imaging is the software-defined artificial imaging system, which takes small-scale image data collected from real scenes as input and generates large amounts of artificial image data. We survey realization methods of parallel imaging, including graphics rendering, image style transfer, and generative models. Furthermore, we compare the properties of artificial images and actual images, and discuss domain adaptation strategies.
