Abstract

In current classification tasks, a common way to improve accuracy is to pre-train the model on a large-scale domain-specific dataset. However, many tasks, such as writer identification (writerID), lack suitable large-scale datasets in practical scenarios. To address this issue, this paper proposes a method that improves prediction accuracy without relying on extensive pre-training. Instead, it leverages the diversity of the probability distributions predicted by multiple networks and enhances top-1 accuracy through complementary post-processing. Specifically, a top-k distribution is sampled from each network's probability mass function. When the differences among these top-k distributions are maximized, their intersection can be narrowed down to little besides the correct category, so a correct target to which every network assigned only a suboptimal probability can be rectified via the unique intersection. Furthermore, our method exhibits an intriguing trait in experiments: its prediction accuracy improves as new SOTA methods are incorporated, ultimately surpassing the performance of those methods.
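The core post-processing idea can be illustrated with a minimal sketch. The function names, the fallback rule, and the tie-breaking by mean probability below are our illustrative assumptions, not the paper's exact procedure: each network contributes its top-k candidate set, the sets are intersected, and a label that is only suboptimal in every individual network can still win if it is the sole common candidate.

```python
import numpy as np

def topk_indices(probs, k):
    """Indices of the k highest-probability classes for one network."""
    return set(int(i) for i in np.argsort(probs)[-k:])

def intersection_rectify(prob_dists, k):
    """Hypothetical sketch of the complementary post-processing step.

    Intersect the networks' top-k candidate sets; within the
    intersection, pick the class with the highest mean probability.
    Fall back to the mean-probability argmax if the intersection
    is empty (fallback rule is our assumption).
    """
    prob_dists = [np.asarray(p, dtype=float) for p in prob_dists]
    common = set.intersection(*(topk_indices(p, k) for p in prob_dists))
    mean_probs = np.mean(prob_dists, axis=0)
    if common:
        return max(common, key=lambda c: mean_probs[c])
    return int(np.argmax(mean_probs))

# Toy example: three networks, five classes. Class 2 is never any
# network's top-1 choice, yet it is the only label shared by all
# three top-2 sets, so the intersection rectifies the prediction.
p1 = [0.05, 0.40, 0.35, 0.15, 0.05]  # top-1 would be class 1
p2 = [0.38, 0.02, 0.36, 0.14, 0.10]  # top-1 would be class 0
p3 = [0.10, 0.05, 0.37, 0.42, 0.06]  # top-1 would be class 3
print(intersection_rectify([p1, p2, p3], k=2))  # -> 2
```

Note how maximizing the diversity of the networks' errors is what makes the intersection informative: if all networks made the same top-1 mistake, the intersection would contain the wrong label too.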

