We propose a descent subgradient algorithm for unconstrained nonsmooth nonconvex multiobjective optimization problems. To find a descent direction, we present an iterative process that efficiently approximates the ε-subdifferential of each objective function. To this end, we develop a new variant of Mifflin's line search in which the subgradients are arbitrary, and we prove its finite convergence under a semismoothness assumption. To reduce the number of subgradient evaluations, we employ a backtracking line search that identifies the objectives requiring an improvement in the current approximation of the ε-subdifferential; for the remaining objectives, no new subgradients are computed. Unlike bundle-type methods, the proposed approach handles nonconvexity without algorithmic adjustments. Moreover, the quadratic subproblems have a simple structure, which makes the method easy to implement. We analyze the global convergence of the proposed method and prove that every accumulation point of the generated sequence satisfies a necessary Pareto optimality condition. Furthermore, our convergence analysis resolves a theoretical challenge in a recently developed subgradient method. Numerical experiments demonstrate the practical capability of the proposed method and evaluate its efficiency on a diverse range of nonsmooth test problems.
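As a rough illustration of the kind of simple quadratic subproblem the abstract refers to, the following is a minimal sketch under an assumed (standard) construction: the descent direction is taken as the negative of the minimum-norm element of the convex hull of the collected subgradients, as is common in subgradient- and bundle-type methods. The function name, the SLSQP solver, and the sample data are illustrative assumptions, not the paper's actual subproblem.

```python
# Minimal sketch of a min-norm quadratic subproblem that typically yields a
# common descent direction in multiobjective subgradient methods. Illustrative
# reconstruction only, not the authors' implementation: the rows of G stand in
# for the collected approximations of the ε-subdifferentials of the objectives.
import numpy as np
from scipy.optimize import minimize

def min_norm_direction(G):
    """G: (m, n) array whose rows are collected (ε-)subgradients.
    Returns d = -argmin ||G^T lam||^2 over the simplex {lam >= 0, sum = 1}."""
    m = G.shape[0]
    Q = G @ G.T  # Gram matrix; the QP objective is lam^T Q lam
    res = minimize(
        lambda lam: lam @ Q @ lam,
        x0=np.full(m, 1.0 / m),                      # start at simplex center
        jac=lambda lam: 2.0 * Q @ lam,
        bounds=[(0.0, 1.0)] * m,
        constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
        method="SLSQP",
    )
    g_star = G.T @ res.x          # min-norm element of conv{g_1, ..., g_m}
    return -g_star                # a descent direction when ||g_star|| > 0

# Example: subgradients collected near a kink of a max-type objective
G = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(min_norm_direction(G))     # approximately (-0.5, -0.5)
```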