Self-Supervised Representation Learning for Document Image Classification

Highlights

  • Deep learning methods are effective at automatically learning generic representations that are useful for a wide range of tasks, but they rely on large amounts of task-specific labels

  • Our results show that self-supervised learning is effective when labeled data is limited, consistent with prior observations for natural scene image classification [6], [7], [22]

  • We evaluated the performance of self-supervised learning methods on two widely used document image classification benchmarks, RVL-CDIP [23] and Tobacco-3482 [13]


Summary

Introduction

Deep learning methods are effective at automatically learning generic representations that are useful for a wide range of tasks, but they rely on large amounts of task-specific labels. In order to circumvent this reliance on large labeled datasets, recent methods have explored self-supervised representation learning techniques, which excel at automatically learning useful input representations. Self-supervised learning has been commonly employed to learn rich language models [1]–[5], and these techniques have also been successfully leveraged in other domains, including visual classification and speech recognition [6]–[8].
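To make the idea above concrete, the following is a minimal sketch of self-supervised pretraining followed by supervised fine-tuning, assuming a PyTorch ResNet-18 backbone and a rotation-prediction pretext task; the backbone, pretext task, class counts, and hyper-parameters are illustrative assumptions, not the specific method evaluated in the paper.

# Illustrative sketch only: self-supervised pretraining on unlabeled document
# images via rotation prediction, then reuse of the learned representation for
# classification with limited labels. Model and hyper-parameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def rotation_batch(images):
    # images: (N, 3, H, W) float tensor; rotate by 0/90/180/270 degrees and
    # use the rotation index as a free (self-supervised) label.
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

backbone = models.resnet18(weights=None)     # representation learner
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()                  # expose features instead of logits
pretext_head = nn.Linear(feat_dim, 4)        # predicts one of the 4 rotations

optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(pretext_head.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def pretrain_step(unlabeled_images):
    # One self-supervised step: no document-class labels are used here.
    x, y = rotation_batch(unlabeled_images)
    loss = criterion(pretext_head(backbone(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After pretraining, the backbone is kept and a small classification head is
# fine-tuned on the limited labeled set (e.g. 16 classes for RVL-CDIP,
# 10 for Tobacco-3482).
classifier = nn.Linear(feat_dim, 16)

In this setup the pretext labels come for free from the data itself, which is why the representation can be learned without annotations and only the final classifier needs the scarce labeled examples.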


