Abstract

This paper investigates whether a representative subset selected from a large original dataset can match the performance obtained by training on the entire dataset, in the context of training neural language models. We employ a likelihood-based scoring method, built on two distinct types of pre-trained language models, to select a representative subset. We conduct experiments on 17 widely used natural language processing datasets with 24 evaluation metrics. The experimental results show that the representative subset obtained using the likelihood difference score can reach 90% of full-dataset performance even when the dataset is reduced to roughly two to three orders of magnitude smaller than the original. We also compare against models trained on a randomly selected subset of the same size to demonstrate the effectiveness of the representative subset.
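The selection procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: each example is scored by the difference of its log-likelihoods under two language models, and the top-scoring examples form the subset. The toy scoring functions below are placeholders standing in for the two pre-trained language models.

```python
def likelihood_difference_scores(examples, loglik_a, loglik_b):
    """Score each example by the difference of log-likelihoods
    under two language models (passed in as callables)."""
    return [loglik_a(x) - loglik_b(x) for x in examples]

def select_subset(examples, scores, k):
    """Keep the k highest-scoring examples as the representative subset."""
    ranked = sorted(zip(scores, examples), key=lambda p: p[0], reverse=True)
    return [x for _, x in ranked[:k]]

# Toy stand-ins for the two pre-trained LMs (illustration only):
# crude "log-likelihoods" derived from surface statistics of the text.
def toy_loglik_a(text):
    return -0.1 * len(text)

def toy_loglik_b(text):
    return -0.5 * text.count(" ")

examples = ["a short sentence", "another example here", "tiny"]
scores = likelihood_difference_scores(examples, toy_loglik_a, toy_loglik_b)
subset = select_subset(examples, scores, k=2)
```

In practice the two callables would compute sentence log-probabilities with actual pre-trained language models; only the ranking-and-truncation logic carries over unchanged.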
