Abstract

Vector graphics, whether 2D or 3D, are central to many professional domains, including graphic design, web design, architecture, and engineering. However, traditional methods of creating vector graphics are labor-intensive and inefficient. This review surveys deep learning models for generating and manipulating 2D and 3D vector graphics, summarizing their main tasks and methods. For 2D vector graphics, it examines models such as Convolutional Neural Networks and Generative Adversarial Networks applied to tasks including font and icon generation and image manipulation. For 3D vector graphics, it assesses progress in models for point cloud and image reconstruction, as well as 3D shape generation, based on approaches such as Variational Autoencoders, Multi-Layer Perceptrons, and Transformers. The review also evaluates the progress and limitations of these models, providing a comprehensive overview of deep learning for vector graphics manipulation and emphasizing their potential impact on the design industry while recognizing the challenges ahead.

