Abstract

Vector optimization problems are a significant extension of scalar optimization and have a wide range of applications in economics, decision theory, game theory, information theory, and optimal control theory. In this paper, unlike the general subgradient, bundle, and gradient sampling methods used to solve nonsmooth vector optimization problems, which rely on a scalarization approach, a subgradient method without the usual scalarization is proposed for minimizing a non-differentiable convex function; it works directly with the vector-valued function. A general sub-gradient method for non-smooth convex optimization that includes regularization and interior point variants of Newton's method is proposed. The algorithm builds a sequence of efficient points in the interior of the epigraph of the objective function that satisfy the KKT conditions. Under suitable conditions it is proved that the sequence generated by the algorithm converges to an ε-efficient point.
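
The abstract describes a subgradient iteration that works directly with the vector-valued objective; the paper's algorithm itself is not reproduced here. As context only, the following is a minimal sketch of the classical scalar subgradient step x_{k+1} = x_k - α_k g_k with g_k ∈ ∂f(x_k), under assumed choices of objective, subgradient oracle, and diminishing step size.

```python
import numpy as np

def subgradient_descent(f, subgrad, x0, steps=1000):
    """Minimal classical subgradient iteration for a nonsmooth convex f.

    This is not the paper's vector-valued algorithm -- only a scalar
    sketch with a diminishing step size alpha_k = 1/(k+1) for context.
    `subgrad(x)` is assumed to return any element of the subdifferential.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        g = subgrad(x)
        x = x - (1.0 / (k + 1)) * g           # diminishing step size
        if f(x) < best_f:                     # subgradient steps need not decrease f,
            best_x, best_f = x.copy(), f(x)   # so track the best iterate seen
    return best_x, best_f

# Example: f(x) = |x1| + |x2| is nonsmooth at 0; a sign vector is a subgradient.
if __name__ == "__main__":
    f = lambda x: np.sum(np.abs(x))
    subgrad = lambda x: np.sign(x)
    x_star, f_star = subgradient_descent(f, subgrad, x0=[3.0, -2.0])
    print(x_star, f_star)
```

Because a subgradient step is not a descent step in general, the sketch keeps the best iterate found so far rather than the last one.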

Highlights

  • Vector optimization problems are a significant extension of scalar optimization and have many real-life applications

  • The scalarization approach enables the computation of efficient (Pareto) or weakly efficient solutions by formulating single-objective optimization problems whose parameters must be chosen in advance; a minimal weighted-sum sketch follows this list

  • Unlike smooth vector optimization problems, non-smooth vector optimization deals with problems where the functions involved are not continuously differentiable
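
As noted in the second highlight above, scalarization reduces the vector problem to single-objective problems whose parameters (weights) must be chosen in advance. The sketch below illustrates a plain weighted-sum scalarization on two assumed convex objectives; it is an illustration of the standard approach, not the paper's method, which avoids scalarization altogether.

```python
import numpy as np
from scipy.optimize import minimize

# Two illustrative convex objectives (assumed, not taken from the paper).
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

def weighted_sum_scalarization(weights, x0=(0.0, 0.0)):
    """Solve min_x w1*f1(x) + w2*f2(x) for fixed positive weights.

    Each choice of weights yields one (weakly) efficient point of the
    underlying bi-objective problem; the weights are fixed in advance.
    """
    w1, w2 = weights
    return minimize(lambda x: w1 * f1(x) + w2 * f2(x), x0).x

# Sweeping the weights traces out part of the Pareto front.
for w1 in (0.25, 0.5, 0.75):
    x = weighted_sum_scalarization((w1, 1.0 - w1))
    print(w1, x, f1(x), f2(x))
```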


Summary

Introduction

Vector optimization problems are a significant extension of scalar optimization and have many real-life applications. Most of the existing numerical methods for non-smooth vector optimization problems are based either on subgradients combined with a scalarization approach or on space dilation type algorithms [20, 22, 29]. Held and Karp [19], unaware of the work of Shor, developed a method for the travelling salesman problem that uses subgradient optimization to compute a bound in a Lagrangian relaxation scheme. This seminal contribution led to others; see Fisher [10]. The convergence of standard derivative-free methods such as Powell's method and genetic algorithms has been proved only for smooth functions. In Section 4 we describe the convergence analysis of the algorithm for non-smooth functions.
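
For context on the Held and Karp reference: subgradient optimization is used there to update Lagrange multipliers while maximizing a Lagrangian lower bound. The sketch below shows the generic multiplier update λ_{k+1} = max(0, λ_k + t_k (aᵀx_k − b)) on a small assumed 0–1 problem with one relaxed constraint; it illustrates the idea only and is not Held and Karp's 1-tree bound for the travelling salesman problem.

```python
import numpy as np

# Illustrative data (assumed): minimize c^T x over x in {0,1}^n
# subject to a^T x <= b, with the constraint relaxed into the objective.
c = np.array([-4.0, -3.0, -5.0, 2.0])
a = np.array([2.0, 1.0, 3.0, 1.0])
b = 3.0

def lagrangian_bound(lam):
    """Solve the relaxed subproblem min_x c^T x + lam * (a^T x - b) over {0,1}^n.

    With the constraint dualized, the subproblem separates: set x_i = 1
    exactly when its reduced cost c_i + lam * a_i is negative.
    """
    x = (c + lam * a < 0).astype(float)
    return c @ x + lam * (a @ x - b), x

lam, best = 0.0, -np.inf
for k in range(100):
    value, x = lagrangian_bound(lam)
    best = max(best, value)                    # every value is a valid lower bound
    step = 1.0 / (k + 1)                       # diminishing step size
    lam = max(0.0, lam + step * (a @ x - b))   # subgradient update of the multiplier
print("best Lagrangian lower bound:", best)
```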

Preliminaries and Basic Definitions
A Sub-gradient Method for Vector Optimization
Convergence Result
Conclusion