Abstract

Font is a crucial aspect of design, yet classifying fonts is harder than classifying natural objects, because font images differ fundamentally from photographs of natural scenes. This paper applies contrastive learning to font style classification. We conducted a series of experiments to demonstrate the robustness of contrastive image representation learning for this task. First, we built a multilingual synthetic dataset of Chinese, English, and Korean fonts. Next, we trained models using several contrastive loss functions: normalized temperature-scaled cross-entropy (NT-Xent) loss, triplet loss, and supervised contrastive loss. We deliberately departed from the standard contrastive learning recipe by applying no image augmentation, since font glyphs are already highly structured. Compared against a fully supervised baseline, contrastive learning achieved comparable accuracy with fewer annotated images and fewer training epochs. We also evaluated how the choice of contrastive loss function affects training.
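For readers unfamiliar with the first loss mentioned above, the following is a minimal NumPy sketch of NT-Xent (normalized temperature-scaled cross-entropy) loss, as used in SimCLR-style contrastive learning. It is an illustrative reimplementation, not the authors' code; the pairing convention (rows 2i and 2i+1 form a positive pair) and the default temperature are assumptions for the example.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss over a batch of embeddings.

    z: array of shape (2N, d); rows (2i, 2i+1) are assumed to be the
    two views of the same sample (a positive pair).
    """
    # Project onto the unit sphere so dot products are cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature          # scaled pairwise similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-similarity

    n = z.shape[0]
    pos = np.arange(n) ^ 1                 # partner index: 0<->1, 2<->3, ...

    # Log-softmax over each row, then pick out the positive's log-probability.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()
```

With perfectly aligned pairs the loss is small; as positive pairs drift apart in embedding space it grows, which is the signal the encoder is trained to minimize.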
