Abstract
We introduce a scalable and fast method for solving distributionally robust optimization (DRO). Prior work has demonstrated that DRO outperforms empirical risk minimization when test data are drawn from a collection of shifted distributions (the "uncertainty set"). However, DRO is difficult to apply to large-scale datasets and heavily parameterized models, because its objective is defined at the datapoint level and is non-differentiable. In this paper, we formalize the DRO problem as the supremum over a family of subgroup-level loss functions, where each subgroup loss is the cost over one partition of the uncertainty set. We then take the maximum of the subgroup losses as the objective and update model parameters by reweighting descent directions computed from a differentiable surrogate objective. Experimental results show that heavily parameterized models trained with the proposed method adapt successfully to the uncertainty set, whether its distribution is out-of-domain or imbalanced. Remarkably, with the explored reweighting strategy, the proposed algorithm achieves competitive performance and robustness.
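To make the described update concrete, below is a minimal sketch of one training step in the spirit of the abstract: per-subgroup losses are computed, subgroup weights are shifted toward the worst-case subgroup (a differentiable smoothing of the max), and the model descends on the weighted sum. This assumes PyTorch; the function name `group_dro_step`, the step size `eta`, and the batching scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def group_dro_step(model, optimizer, group_batches, group_weights, eta=0.1):
    """One hypothetical update step: compute per-subgroup losses, reweight
    subgroups toward the worst-case loss, then descend on the weighted sum.
    `group_batches` is one (inputs, labels) batch per subgroup;
    `group_weights` is a tensor of nonnegative weights summing to 1."""
    losses = []
    for x, y in group_batches:
        logits = model(x)
        losses.append(torch.nn.functional.cross_entropy(logits, y))
    losses = torch.stack(losses)

    # Exponentiated reweighting (an assumption, following common group-DRO
    # practice): subgroups with higher loss receive larger weight, giving a
    # differentiable surrogate for the non-differentiable max over subgroups.
    with torch.no_grad():
        group_weights *= torch.exp(eta * losses)
        group_weights /= group_weights.sum()

    # Descend on the weighted sum of subgroup losses.
    objective = (group_weights * losses).sum()
    optimizer.zero_grad()
    objective.backward()
    optimizer.step()
    return losses.detach(), group_weights
```

As `eta` grows, the weighted sum approaches the hard maximum over subgroup losses; a small `eta` keeps the update closer to uniform averaging over subgroups.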