Abstract
Nonconvex optimization problems have recently attracted significant attention. However, both efficient algorithms and solid theory remain limited, and the difficulty is even more pronounced for structured large-scale problems arising in many real-world applications. This article proposes an application-driven algorithmic framework for structured nonconvex optimization that uses distributed and parallel techniques to jointly handle high-dimensional model parameters and distributed training data. The theoretical convergence of our algorithm is established under moderate assumptions. We apply the proposed method to popular multitask applications, including a multitask reinforcement learning problem. The promising empirical performance demonstrates that our framework is both effective and efficient.