Abstract

Data augmentation with mixup has proven effective across a variety of machine learning tasks. However, previous methods primarily generate previously unseen virtual examples by mixing randomly selected samples, which can overlook the importance of similar spatial distributions. In this work, we extend mixup and propose MbMix, a simple yet novel training approach that implements mixup with memory batch augmentation. MbMix selects the samples to be mixed via a memory batch, guaranteeing that the generated samples share the spatial distribution of the dataset samples. Through extensive experiments, we empirically validate that our method outperforms several mixup methods across a broad spectrum of text classification benchmarks, including sentiment classification, question type classification, and textual entailment. Notably, our proposed method achieves a 5.61% improvement over existing approaches on the TREC-fine benchmark. Our approach is versatile, with applications in sentiment analysis, question answering, and fake news detection.
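The abstract does not give implementation details, but standard mixup interpolates pairs of examples (features and labels) with a Beta-distributed coefficient. A minimal sketch of the general idea of drawing mix partners from a stored memory batch rather than from the shuffled current batch is shown below; the random pairing rule and the `memory_x`/`memory_y` buffers are assumptions for illustration, not MbMix's actual selection criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_with_memory(batch_x, batch_y, memory_x, memory_y, alpha=0.2):
    """Mix the current batch with partners drawn from a memory batch.

    Standard mixup draws partners by shuffling the current batch; here the
    partners come from a memory batch of earlier samples instead. The
    uniform-random pairing below is a simplifying assumption -- the paper's
    exact selection rule is not described in the abstract.
    """
    lam = rng.beta(alpha, alpha)                       # mixing coefficient
    idx = rng.integers(0, len(memory_x), len(batch_x)) # assumed pairing rule
    mixed_x = lam * batch_x + (1 - lam) * memory_x[idx]
    mixed_y = lam * batch_y + (1 - lam) * memory_y[idx]
    return mixed_x, mixed_y

# Toy usage: 4 examples with 3-dim features, one-hot labels over 2 classes.
batch_x = rng.normal(size=(4, 3))
batch_y = np.eye(2)[[0, 1, 0, 1]]
memory_x = rng.normal(size=(8, 3))                # features from past batches
memory_y = np.eye(2)[rng.integers(0, 2, size=8)]  # their one-hot labels

mx, my = mixup_with_memory(batch_x, batch_y, memory_x, memory_y)
print(mx.shape, my.shape)  # (4, 3) (4, 2)
```

Because each mixed label is a convex combination of two one-hot vectors, the label rows still sum to 1 and can be trained against with a soft cross-entropy loss.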

