Abstract
Automatic design has become a popular application topic in computer vision. Previous methods for automatic design are mostly saliency-based, relying on an off-the-shelf saliency detection model and hand-crafted aesthetic rules to rank multiple proposals. We argue that multi-stage generation and excessive reliance on saliency maps have hindered progress toward better automatic design solutions. In this work, we explore the possibility of a saliency-free solution in a representative scenario: automatic poster design. We propose a novel end-to-end framework for automatic poster design, dividing the problem into two sub-tasks: layout prediction and attribute identification. We design a neural network based on multi-modal feature extraction that learns the two sub-tasks jointly. We train the network with supervision automatically extracted from semi-structured posters, bypassing a large amount of manual labeling. Both qualitative and quantitative results demonstrate the strong performance of our end-to-end approach despite discarding the explicit saliency detection module. Trained with self-supervision, our system performs well on automatic design by implicitly learning aesthetic constraints within the neural network.
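To make the joint formulation concrete, below is a minimal sketch of a multi-modal network with two heads, one for layout prediction and one for attribute identification, trained with a combined loss. All module names, dimensions, and the specific encoders are illustrative assumptions and not the authors' actual architecture.

```python
# A minimal multi-task sketch (assumed, not the paper's architecture):
# an image encoder and a text encoder are fused, then two heads predict
# element layouts and element attributes jointly.
import torch
import torch.nn as nn

class PosterDesignNet(nn.Module):
    def __init__(self, vocab_size=10000, num_attr_classes=8, num_elements=4):
        super().__init__()
        # Image branch: encode the background poster image.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Text branch: embed the text contents to be placed on the poster.
        self.text_embed = nn.Embedding(vocab_size, 64)
        self.fuse = nn.Linear(64 + 64, 128)
        # Layout head: one box (x, y, w, h) per design element.
        self.layout_head = nn.Linear(128, num_elements * 4)
        # Attribute head: one class (e.g. a font/color bucket) per element.
        self.attr_head = nn.Linear(128, num_elements * num_attr_classes)
        self.num_elements = num_elements
        self.num_attr_classes = num_attr_classes

    def forward(self, image, text_tokens):
        img_feat = self.image_encoder(image)                   # (B, 64)
        txt_feat = self.text_embed(text_tokens).mean(dim=1)    # (B, 64)
        fused = torch.relu(self.fuse(torch.cat([img_feat, txt_feat], dim=-1)))
        boxes = self.layout_head(fused).view(-1, self.num_elements, 4)
        attrs = self.attr_head(fused).view(-1, self.num_elements,
                                           self.num_attr_classes)
        return boxes, attrs

# Usage: both sub-tasks are supervised by labels extracted automatically
# from semi-structured posters and optimized with a joint loss.
model = PosterDesignNet()
image = torch.randn(2, 3, 128, 128)
tokens = torch.randint(0, 10000, (2, 16))
boxes, attrs = model(image, tokens)
target_boxes = torch.rand(2, 4, 4)
target_attrs = torch.randint(0, 8, (2, 4))
loss = nn.functional.smooth_l1_loss(boxes, target_boxes) + \
       nn.functional.cross_entropy(attrs.reshape(-1, 8), target_attrs.reshape(-1))
loss.backward()
```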