Abstract
This paper aims to generate better representations of visual arts, which play a key role in visual arts analysis. Museums and galleries hold large numbers of artworks in their databases; hiring art experts for analysis tasks (e.g., classification, annotation) is time-consuming and expensive, and the results are unstable because they depend heavily on the experts' experience. The problem of generating better representations of visual arts is of great interest to us because of its application potential and its research challenges: both the content and the unique style within each artwork must be summarized and learned when generating the representation. For example, by studying a vast number of artworks, art experts accumulate and refine knowledge of the unique characteristics of each visual art form to perform analysis tasks; doing the same is non-trivial for a computer. In this paper, we present a unified framework, called DeepArt, that learns joint representations capturing both the content and the style of visual arts simultaneously. The framework learns the unique characteristics of visual arts directly from a large-scale visual arts dataset, making it more flexible and accurate than traditional handcrafted approaches. We also introduce Art500k, a large-scale visual arts dataset containing over 500,000 artworks annotated with detailed labels of artist, art movement, genre, etc. Extensive empirical studies and evaluations based on our framework and Art500k demonstrate the superiority of the framework and the usefulness of Art500k. A practical system for visual arts retrieval and annotation has been implemented on top of our framework and dataset. Code, data, and the system are publicly available at http://deepart.ece.ust.hk.