Imbalanced data distributions are prevalent in real-world classification and regression tasks. Data augmentation is a commonly employed technique to mitigate this issue, with implicit methods gaining attention for their effectiveness and efficiency. However, implicit data augmentation has not been extensively explored in the context of regression. To address this gap, we introduce IRDA, a novel learning method for regression that incorporates implicit data augmentation. Our approach comprises a new augmentation strategy tailored specifically to deep imbalanced regression, and a regression loss function suited to real-world data with imbalanced label distributions and non-uniformly distributed features. We derive an easily computable surrogate loss and propose two implicit data augmentation algorithms, one incorporating meta-learning and one without. Additionally, we provide a regularization perspective that offers a deeper understanding of IRDA. We evaluate IRDA on five datasets, including a large-scale dataset, demonstrating its effectiveness in mitigating the adverse effects of imbalanced data distributions and its adaptability to various regression tasks.
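To make the idea of an "easily computable surrogate loss" concrete, the sketch below illustrates one common way implicit data augmentation can be expressed for regression; it is an assumption-laden illustration, not the paper's actual formulation. It assumes a linear regression head on top of deep features and Gaussian feature perturbations: for a head w^T z + b and squared error, perturbing the feature z with noise drawn from N(0, lambda * Sigma) yields the closed-form expected loss (w^T z + b - y)^2 + lambda * w^T Sigma w, so the augmentation never has to be sampled explicitly. The names `ImplicitAugSurrogateMSE`, `feature_cov`, and `aug_strength` are hypothetical.

```python
# Hypothetical sketch of an implicit-augmentation surrogate loss for regression.
# Assumptions (not taken from the paper): a linear regression head, Gaussian
# feature perturbations N(0, lambda * Sigma), and squared-error loss.
import torch
import torch.nn as nn


class ImplicitAugSurrogateMSE(nn.Module):
    def __init__(self, feature_dim: int, aug_strength: float = 0.5):
        super().__init__()
        self.head = nn.Linear(feature_dim, 1)   # linear regression head w^T z + b
        self.aug_strength = aug_strength        # lambda: augmentation strength

    def forward(self, features: torch.Tensor, targets: torch.Tensor,
                feature_cov: torch.Tensor) -> torch.Tensor:
        # features: (B, D) deep features; targets: (B,) regression labels
        # feature_cov: (D, D) covariance of the feature perturbation directions
        preds = self.head(features).squeeze(-1)
        mse = (preds - targets).pow(2)                       # per-sample squared error
        w = self.head.weight.squeeze(0)                      # (D,) head weights
        variance_term = self.aug_strength * (w @ feature_cov @ w)
        # Closed-form expectation of the squared error under Gaussian perturbation:
        # no explicit sampling of augmented features is needed.
        return (mse + variance_term).mean()


if __name__ == "__main__":
    # Usage sketch: features would normally come from a backbone network,
    # and the covariance from a running estimate over training features.
    B, D = 32, 64
    feats = torch.randn(B, D)
    y = torch.randn(B)
    cov = torch.cov(feats.T) + 1e-4 * torch.eye(D)  # batch covariance estimate
    loss_fn = ImplicitAugSurrogateMSE(feature_dim=D)
    loss = loss_fn(feats, y, cov)
    loss.backward()
    print(loss.item())
```

In this style of method, the heavy lifting is in how Sigma is estimated (e.g., label-conditioned or locally weighted for imbalanced targets) and how the augmentation strength is scheduled, which is where IRDA's contributions would plug in.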