Abstract

As systems grow manifold in scale and intricacy, the challenges of parametric misspecification become pronounced. These concerns are further exacerbated in compositional settings, which emerge in problems complicated by modeling risk and robustness. In “Data-Driven Compositional Optimization in Misspecified Regimes,” the authors consider the resolution of compositional stochastic optimization problems plagued by parametric misspecification. Considering settings where such misspecification may be resolved via a parallel learning process, the authors develop schemes that can contend with diverse forms of risk, dynamics, and nonconvexity. They provide asymptotic and rate guarantees for unaccelerated and accelerated schemes for convex, strongly convex, and nonconvex problems in a two-level regime, with extensions to the multilevel setting. Surprisingly, the nonasymptotic rate guarantees show no degradation from the rate statements obtained in a correctly specified regime, and the schemes achieve optimal (or near-optimal) sample complexities for general T-level strongly convex and nonconvex compositional problems.
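To illustrate the flavor of problem the abstract describes — not the authors' actual scheme — the following is a minimal sketch of a two-level stochastic compositional problem under parametric misspecification. A misspecified parameter `theta` is estimated by a parallel learning process while the decision variable `x` follows a stochastic compositional gradient step that tracks the inner expectation. The problem instance (linear inner map, quadratic outer function) and all step-size choices are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical misspecified instance: minimize f(E[g(x; theta*)]) with
#   g(x; theta) = theta * x + noise    (inner random map)
#   f(y)        = 0.5 * y**2           (outer function)
# theta* is unknown, but learnable in parallel from noisy observations
# d with E[d] = theta* (i.e., theta* solves min_theta E[(theta - d)**2]).
theta_star = 2.0

x = 1.0       # decision variable
theta = 0.0   # current belief about the misspecified parameter
y = 0.0       # running estimate tracking the inner expectation g(x; theta)

for k in range(1, 20001):
    gamma = 1.0 / k        # step for the parallel learning process
    alpha = 0.5 / k        # step for the decision variable
    beta = 1.0 / k**0.75   # step for tracking the inner expectation

    # Parallel learning step: move theta toward a noisy draw of theta*.
    d = theta_star + rng.normal()
    theta -= gamma * (theta - d)

    # Track the inner expectation using the current belief theta.
    g_sample = theta * x + rng.normal(scale=0.1)
    y = (1.0 - beta) * y + beta * g_sample

    # Compositional gradient step: d/dx f(g(x)) = g_x'(x) * f'(y) = theta * y.
    x -= alpha * theta * y

# For this instance the true objective is 0.5 * (theta_star * x)**2,
# minimized at x = 0; the coupled scheme drives x toward 0 while
# theta converges to theta_star.
print(round(x, 1), round(theta, 1))
```

The point of the sketch is the coupling: the optimization iterates use the *current* estimate `theta` rather than the unknown `theta_star`, which is exactly the regime in which the abstract's "no degradation" rate results are striking.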
