Abstract

Nonsmooth convex optimization problems are solved over fixed point sets of nonexpansive mappings by using a distributed optimization technique. This is done for a networked system with an operator, who manages the system, and a finite number of users, by solving the problem of minimizing the sum of the operator’s and users’ nondifferentiable, convex objective functions over the intersection of the operator’s and users’ convex constraint sets in a real Hilbert space. We assume that each of their constraint sets can be expressed as the fixed point set of an implementable nonexpansive mapping. This setting allows us to discuss nonsmooth convex optimization problems in which the metric projection onto the constraint set cannot be calculated explicitly. We propose a parallel subgradient algorithm for solving the problem by using the operator’s attribution such that it can communicate with all users. The proposed algorithm does not use any proximity operators, in contrast to conventional parallel algorithms for nonsmooth convex optimization. We first study its convergence property for a constant step-size rule. The analysis indicates that the proposed algorithm with a small constant step size approximates a solution to the problem. We next consider the case of a diminishing step-size sequence and prove that there exists a subsequence of the sequence generated by the algorithm which weakly converges to a solution to the problem. We also give numerical examples to support the convergence analyses.
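The abstract's key assumption is that each constraint set is the fixed point set of an implementable nonexpansive mapping. A minimal sketch of this idea (an illustration, not taken from the paper): the metric projection onto a closed ball is nonexpansive, and its fixed point set is exactly the ball.

```python
import numpy as np

# Illustrative sketch (not from the paper): the metric projection onto a
# closed ball C = {x : ||x - c|| <= r} is a nonexpansive mapping whose
# fixed point set is exactly C, i.e., Fix(P_C) = C.
def project_ball(x, center, radius):
    d = x - center
    dist = np.linalg.norm(d)
    return x.copy() if dist <= radius else center + radius * d / dist

c, r = np.zeros(2), 1.0
inside = np.array([0.3, 0.4])    # lies in C, so it is a fixed point
outside = np.array([3.0, 4.0])   # lies outside C, so it is moved

assert np.allclose(project_ball(inside, c, r), inside)
assert not np.allclose(project_ball(outside, c, r), outside)

# Nonexpansiveness: ||P(x) - P(y)|| <= ||x - y|| for all x, y.
x, y = np.array([2.0, 0.5]), np.array([-1.0, 3.0])
lhs = np.linalg.norm(project_ball(x, c, r) - project_ball(y, c, r))
assert lhs <= np.linalg.norm(x - y) + 1e-12
```

The paper's point is the converse situation: the constraint set may be one for which this projection cannot be computed explicitly, yet some other nonexpansive mapping with the same fixed point set is still implementable.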

Highlights

  • Convex optimization theory has been widely used to solve practical convex minimization problems over complicated constraints, e.g., convex optimization problems with a fixed point constraint and with a variational inequality constraint

  • This paper focuses on a networked system consisting of an operator, who manages the system, and a finite number of participating users, and it considers the problem of minimizing the sum of the operator’s and all users’ nonsmooth convex functions over the intersection of the operator’s and all users’ fixed point constraint sets in a real Hilbert space

  • We show that our algorithm with a small constant step size approximates a solution to the problem of minimizing the sum of nonsmooth, convex functions over the fixed point sets of nonexpansive mappings


Summary

Introduction

Convex optimization theory has been widely used to solve practical convex minimization problems over complicated constraints, e.g., convex optimization problems with a fixed point constraint and with a variational inequality constraint. We propose a parallel subgradient algorithm for nonsmooth convex optimization with fixed point constraints. We show that our algorithm with a small constant step size approximates a solution to the problem of minimizing the sum of nonsmooth, convex functions over the fixed point sets of nonexpansive mappings. We also show that there exists a subsequence of the sequence generated by our algorithm with a diminishing step size which weakly converges to a solution to the problem. A subsequent section presents the parallel subgradient algorithm for solving the main problem and studies its convergence properties for a constant step size and a diminishing step size. In contrast to conventional parallel algorithms, the proposed algorithm does not use proximity operators, which can be computed explicitly only for certain convex functions. When the step-size sequence (λn)n∈N satisfies suitable diminishing conditions, there exists a subsequence of (xn)n∈N that weakly converges to a point in X.
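The parallel structure described above can be sketched on a toy problem. The update form below is a hedged illustration, not the paper's exact method: each user takes a subgradient step on its own function and applies its own nonexpansive mapping, and the operator averages the users' results, with a diminishing step-size sequence.

```python
# Hedged sketch of a parallel subgradient step with fixed point constraints;
# the update form is an illustration, not the paper's exact method.
# Toy problem: minimize f1(x) + f2(x) = |x - 2| + |x + 1|
# over C1 ∩ C2, where C_i = Fix(T_i) and each T_i is a nonexpansive
# projection: C1 = [0, 3], C2 = [-1, 1], so C1 ∩ C2 = [0, 1].

def T1(x): return min(max(x, 0.0), 3.0)   # projection onto [0, 3]
def T2(x): return min(max(x, -1.0), 1.0)  # projection onto [-1, 1]

def sgn(t): return (t > 0) - (t < 0)      # a subgradient of |.| at t

x = 5.0
for n in range(2000):
    lam = 1.0 / (n + 2)        # diminishing step-size sequence (λn)
    # each user i takes a subgradient step on f_i and applies T_i in
    # parallel; the operator then averages the users' results
    y1 = T1(x - lam * sgn(x - 2.0))
    y2 = T2(x - lam * sgn(x + 1.0))
    x = 0.5 * (y1 + y2)

# the iterate approaches the solution set [0, 1] of the toy problem
assert 0.0 <= x <= 1.05
assert abs(abs(x - 2.0) + abs(x + 1.0) - 3.0) < 1e-9  # optimal value is 3
```

Note that only subgradients and the mappings T_i are evaluated here; no proximity operator appears, which is the feature the paper emphasizes over conventional parallel proximal algorithms.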

Numerical examples
Conclusion