Abstract

It is well known that the convex separation principle plays a fundamental role in many aspects of nonlinear analysis, optimization, and their applications. Indeed, the whole of convex analysis revolves around the use of separation theorems for convex sets. In problems with nonconvex data, separation theorems are applied to convex approximations. This is the conventional way to derive necessary optimality conditions in constrained optimization: first build tangential convex approximations of the problem data around an optimal solution in primal spaces, and then apply convex separation theorems to obtain supporting elements in dual spaces (Lagrange multipliers, adjoint arcs, prices, etc.). For problems of nonsmooth optimization, this approach inevitably leads to the use of convex sets of normals and subgradients, whose calculus is also based on convex separation theorems.

Mathematics Subject Classification: 49J40, 49J50, 49J52, 49K24, 49K27, 49K40, 49N40, 58C06, 58C20, 58C25, 65K05, 65L12, 90C29, 90C31, 90C48, 93B35
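For reference, a standard finite-dimensional form of the separation principle discussed above can be stated as follows; the formulation and notation are standard ones, not quoted from the paper itself:

% A standard statement of convex separation (notation is ours, for illustration only).
\begin{theorem}[Separation of convex sets]
Let $A, B \subset \mathbb{R}^n$ be nonempty disjoint convex sets. Then there exist
a nonzero vector $x^* \in \mathbb{R}^n$ and a number $\alpha \in \mathbb{R}$ such that
\[
  \langle x^*, a \rangle \le \alpha \le \langle x^*, b \rangle
  \quad \text{for all } a \in A,\ b \in B,
\]
i.e., the hyperplane $\{x \in \mathbb{R}^n : \langle x^*, x \rangle = \alpha\}$ separates $A$ and $B$.
\end{theorem}

In the scheme sketched in the abstract, $A$ and $B$ play the role of tangential convex approximations of the problem data around an optimal solution, and the separating element $x^*$ is the supporting dual object (a Lagrange multiplier, adjoint arc, or price).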
