Adjoint subgradient algorithms for nonsmooth optimization problems in Hilbert spaces

Abstract

The subgradient method is a convenient algorithm for solving nondifferentiable convex optimization problems. We extend this algorithm to nonsmooth optimization problems whose objective functions are either C-convex or C-quasiconvex, within the framework of Hilbert spaces. We introduce novel methods, termed adjoint Gutiérrez subgradient algorithms, which leverage adjoint operators, dual cones (C∗), and positive cones (C⊕). Our study addresses both constrained and unconstrained optimization problems and rigorously examines the convergence of these methods.
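
The abstract builds on the classical subgradient iteration x_{k+1} = x_k − a_k g_k with g_k ∈ ∂f(x_k). As a point of reference, here is a minimal finite-dimensional sketch of that classical scalar-valued method (not the paper's adjoint C-convex variant), with diminishing steps and best-iterate tracking on a hypothetical piecewise-linear objective:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, steps=2000):
    """Classical subgradient method with diminishing steps a_k = 1/(k+1).

    f(x_k) need not decrease monotonically, so the best iterate
    found so far is tracked and returned.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        g = subgrad(x)          # any element of the subdifferential
        x = x - g / (k + 1)     # diminishing, non-summable step sizes
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Nondifferentiable example: f(x) = |x1 - 1| + |x2 + 2|, minimized at (1, -2).
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
subgrad = lambda x: np.sign(x - np.array([1.0, -2.0]))

x_best, f_best = subgradient_method(f, subgrad, [0.0, 0.0])
```

Because the steps are non-summable but diminishing, the best recorded value converges to the optimum even though individual iterates oscillate around the kink.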

Similar Papers
  • Conference Article
  • Cited by: 2
  • 10.7551/978-0-262-32621-6-ch102
Constrained Group Counseling Optimization
  • Jul 30, 2014
  • Mohammed Eita + 2 more

  • Book Chapter
  • 10.1007/978-981-19-6561-6_3
Multi-dimensional Variational Control Problem with Data Uncertainty in Constraint Functionals
  • Jan 1, 2022
  • Anurag Jayswal + 2 more

In the last few years, the applicability of the penalty function method, initiated by Zangwill [1] for the constrained optimization problem, has grown significantly. The penalty function approach transforms the constrained optimization problem into an unconstrained one while preserving the optimality of the original problem. In this way, the solution sets of the unconstrained optimization problems ideally converge to the solution sets of the constrained optimization problems. This idea (the convergence of the solution sets of a constrained optimization problem and its associated unconstrained problem) has encouraged researchers to establish the equivalence between the solution sets of constrained and unconstrained problems under suitable assumptions for different kinds of optimization problems. Antczak [2] used an exact \(l_{1}\) penalty function method for convex nondifferentiable multi-objective optimization problems and established the equivalence between the solution set of the original problem and that of its associated penalized problem. Also, Alvarez [3], Antczak [4], and Liu and Feng [5] explored the exponential penalty function method for multi-objective optimization problems and established relationships between the constrained and unconstrained problems. On the other hand, Li et al. [6] used the penalty function method to solve the continuous inequality constrained optimal control problem. Thereafter, Jayswal and Preeti [7] extended the applicability of the penalty function method to the multi-dimensional optimization problem. Moreover, Jayswal et al. [8] explored the same for uncertain optimization problems under convexity assumptions.
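
The transformation described above can be illustrated with the simplest quadratic penalty (a sketch with made-up data, not the exact \(l_{1}\) penalty of [2]): minimizing x² subject to x ≥ 1 becomes minimizing F_μ(x) = x² + μ·max(0, 1−x)², whose unconstrained minimizers μ/(1+μ) approach the constrained solution x* = 1 as μ → ∞:

```python
def grad_F(x, mu):
    """Gradient of the penalized objective F_mu(x) = x**2 + mu * max(0, 1 - x)**2."""
    return 2.0 * x - 2.0 * mu * max(0.0, 1.0 - x)

def minimize_penalized(mu, x=0.0, iters=5000):
    """Gradient descent with step 1/L, where L = 2 + 2*mu bounds the
    Lipschitz constant of grad_F."""
    step = 1.0 / (2.0 + 2.0 * mu)
    for _ in range(iters):
        x -= step * grad_F(x, mu)
    return x

# As mu grows, the unconstrained minimizers mu/(1+mu) converge to x* = 1,
# mirroring the convergence of solution sets the paragraph describes.
solutions = [minimize_penalized(mu) for mu in (1.0, 10.0, 100.0)]
```

The sequence of penalized solutions increases monotonically toward the feasible boundary, which is exactly the solution-set convergence that motivates the equivalence results above.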

  • Research Article
  • Cited by: 4
  • 10.1007/s11432-006-2007-5
Subgradient-based feedback neural networks for non-differentiable convex optimization problems
  • Aug 1, 2006
  • Science in China Series F: Information Sciences
  • Guocheng Li + 2 more

This paper develops the dynamic feedback neural network model for the convex nonlinear programming problem proposed by Leung et al. and introduces subgradient-based dynamic feedback neural networks to solve non-differentiable convex optimization problems. For the unconstrained non-differentiable convex optimization problem, on the assumption that the objective function is convex and coercive, we prove that, for an arbitrarily given initial value, the trajectory of the feedback neural network constructed by a projection subgradient converges to an asymptotically stable equilibrium point which is also an optimal solution of the primal unconstrained problem. For the constrained non-differentiable convex optimization problem, on the assumption that the objective function is convex and coercive and the constraint functions are also convex, a sequence of energy functions and the corresponding dynamic feedback subneural network models based on a projection subgradient are constructed successively; a convergence theorem is then obtained and a stopping condition is given. Furthermore, effective algorithms are designed and some simulation experiments are illustrated.
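
A discrete-time caricature of such subgradient dynamics (a forward-Euler sketch, not the paper's feedback neural network model) integrates the differential inclusion ẋ ∈ −∂f(x) for the coercive convex function f(x) = ‖x‖₁:

```python
import numpy as np

def euler_subgradient_flow(x0, h=0.01, steps=200):
    """Forward-Euler discretization of dx/dt in -∂f(x) for f(x) = ||x||_1,
    whose (set-valued) subgradient is represented here by sign(x)."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(steps):
        x = x - h * np.sign(x)   # one Euler step along a subgradient direction
        trajectory.append(x.copy())
    return x, trajectory

# From x0 = (0.8, -0.5) each coordinate moves toward the minimizer 0 at unit
# speed and then chatters within one step size h of it -- the discrete shadow
# of the flow's convergence to the stable equilibrium at the optimum.
x_final, traj = euler_subgradient_flow([0.8, -0.5])
```

The chattering band of width h is the usual price of discretizing a differential inclusion explicitly; the continuous-time trajectory itself reaches the equilibrium.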

  • Research Article
  • Cited by: 1
  • 10.1080/00207160.2013.854881
An inexact continuation accelerated proximal gradient algorithm for low n-rank tensor recovery
  • Jan 16, 2014
  • International Journal of Computer Mathematics
  • Huihui Liu + 1 more

The low n-rank tensor recovery problem is an interesting extension of compressed sensing. It consists of finding a tensor of minimum n-rank subject to linear equality constraints and arises in many areas such as data mining, machine learning, and computer vision. In this paper, an operator splitting technique and a convex relaxation technique are adopted to transform the low n-rank tensor recovery problem into a convex, unconstrained optimization problem in which the objective function is the sum of a convex smooth function with Lipschitz continuous gradient and a convex function on a set of matrices. To solve this unconstrained nonsmooth convex optimization problem, an accelerated proximal gradient algorithm is proposed, and some computational techniques are used to improve it. Finally, some preliminary numerical results demonstrate the potential value and applications of tensors as well as the efficiency of the proposed algorithm.
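
The accelerated proximal gradient template described above — a smooth least-squares term plus a nonsmooth convex regularizer handled through its proximal operator — can be sketched in the vector/ℓ₁ setting (an illustrative stand-in for the tensor n-rank problem, with made-up data; the nuclear-norm prox of the paper is replaced by soft-thresholding):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, iters=500):
    """Accelerated proximal gradient (FISTA-style) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A.T @ A, 2)     # Lipschitz constant of the smooth part
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)       # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Tiny example with A = diag(2, 1), b = (3, 0.5), lam = 1:
# the minimizer works out coordinate-wise to (1.25, 0).
A = np.diag([2.0, 1.0])
b = np.array([3.0, 0.5])
x_hat = fista(A, b, lam=1.0)
```

The momentum sequence t_k is what upgrades the O(1/k) rate of plain proximal gradient to O(1/k²), which is the "accelerated" part of the algorithm in the abstract.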

  • Book Chapter
  • 10.1017/cbo9781316134504.006
Optimization of Unconstrained Multivariable Functions
  • Feb 1, 2016
  • Suman Dutta

Introduction When the optimization of an objective function is required without any additional constraint, this optimization is called unconstrained optimization. Unconstrained optimization problems appear in some cases in chemical engineering; they are the simplest multivariable optimization problems. Parameter estimation is a significant application in engineering and science where multivariable unconstrained optimization methods are required. Some optimization problems are inherently unconstrained; there is no additional function (sections 2.7, 2.8). When there are natural constraints on the variables, it is sometimes better to ignore these constraints and to assume that they do not have any impact on the optimal solution. Unconstrained problems are also formed by reformulations of constrained optimization problems, in which penalization terms replace the constraints in the objective function and have the effect of discouraging constraint violations. Although we rarely encounter an unconstrained problem as a practical design problem, knowledge of this type of optimization problem is essential for the following reasons: the constraints have very little influence in some design problems; the study of unconstrained optimization techniques is necessary to get a basic idea about constrained optimization; and solving an unconstrained optimization problem is considerably easier than solving a constrained one. Robust and efficient methods are required for the numerical optimization of nonlinear multivariable objective functions. The efficiency of the algorithm is very significant because these optimization problems require an iterative solution process, and this trial and error becomes infeasible when the number of variables exceeds three. Generally, it is very difficult to predict the behavior of a nonlinear function; there may exist local minima or maxima, saddle points, and regions of convexity and concavity.
Therefore, robustness (the capability to reach a desired solution) is desirable for these methods. In some regions, the optimization algorithm may proceed quite slowly toward the optimum, demanding excessive computational time. In this chapter, we discuss various nonlinear programming algorithms for unconstrained optimization. Formulation of Unconstrained Optimization For an unconstrained optimization problem, we minimize an objective function of real variables without any limitations on the values of these variables.
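
As a concrete instance of the nonlinear programming algorithms such a chapter surveys, here is a steepest-descent sketch with Armijo backtracking line search (a generic textbook scheme on a hypothetical ill-scaled quadratic, not a method specific to this chapter); the backtracking loop is one simple way to buy the robustness discussed above:

```python
import numpy as np

def backtracking_gradient_descent(f, grad, x0, iters=500, c=0.5, shrink=0.5):
    """Steepest descent; the trial step length is halved until the Armijo
    sufficient-decrease condition f(x - t g) <= f(x) - c*t*||g||^2 holds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        while f(x - t * g) > f(x) - c * t * (g @ g):
            t *= shrink          # backtrack until sufficient decrease
        x = x - t * g
    return x

# Ill-scaled quadratic f(x) = x1^2 + 10*x2^2, minimized at the origin;
# the line search adapts the step to the differing curvatures automatically.
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x_min = backtracking_gradient_descent(f, grad, [3.0, 1.0])
```

With a fixed step the same iteration would either diverge along the stiff coordinate or crawl along the flat one; the Armijo test trades a few extra function evaluations for that robustness.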

  • Research Article
  • Cited by: 89
  • 10.1109/tac.2020.3001436
Fixed-Time Stable Gradient Flows: Applications to Continuous-Time Optimization
  • Jun 15, 2020
  • IEEE Transactions on Automatic Control
  • Kunal Garg + 1 more

Continuous-time optimization is currently an active field of research in optimization theory; prior work in this area has yielded useful insights and elegant methods for proving stability and convergence properties of the continuous-time optimization algorithms. This article proposes novel gradient-flow schemes that yield convergence to the optimal point of a convex optimization problem within a fixed time from any given initial condition for unconstrained optimization, constrained optimization, and min-max problems. It is shown that the solution of the modified gradient-flow dynamics exists and is unique under certain regularity conditions on the objective function, while fixed-time convergence to the optimal point is shown via Lyapunov-based analysis. The application of the modified gradient flow to unconstrained optimization problems is studied under the assumption of gradient dominance, a relaxation of strong convexity. Then, a modified Newton's method is presented that exhibits fixed-time convergence under some mild conditions on the objective function. Building upon this method, a novel technique for solving convex optimization problems with linear equality constraints that yields convergence to the optimal point in fixed time is developed. Finally, the general min-max problem is considered, and a modified saddle-point dynamics to obtain the optimal solution in fixed time is developed.
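
The flavor of such fixed-time schemes can be illustrated on the scalar quadratic f(x) = x²/2, ∇f(x) = x, with the standard two-term fixed-time stable dynamics ẋ = −c₁ sign(x)|x|^{1/2} − c₂ sign(x)|x|^{3/2} (a generic textbook example integrated by forward Euler, not the article's exact gradient-flow design): trajectories from very different initial conditions settle within a common, initial-condition-independent time bound.

```python
import numpy as np

def simulate(x0, T=5.0, dt=1e-3, c1=1.0, c2=1.0):
    """Forward-Euler integration of the fixed-time stable scalar dynamics
    dx/dt = -c1*sign(x)*|x|**0.5 - c2*sign(x)*|x|**1.5."""
    x = float(x0)
    for _ in range(int(T / dt)):
        x -= dt * (c1 * np.sign(x) * abs(x) ** 0.5
                   + c2 * np.sign(x) * abs(x) ** 1.5)
    return x

# The low-power term dominates near the origin and the high-power term far
# away; for c1 = c2 = 1 the exact settling time is 2*arctan(sqrt(|x0|)),
# uniformly bounded by pi regardless of the initial condition.
near = simulate(5.0)
far = simulate(1000.0)
```

This uniform bound on the settling time, independent of the initial condition, is what distinguishes fixed-time from merely finite-time convergence.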

  • Research Article
  • Cited by: 3
  • 10.1016/j.cor.2018.12.006
A numerical study of applying spectral-step subgradient method for solving nonsmooth unconstrained optimization problems
  • Dec 5, 2018
  • Computers & Operations Research
  • M Loreto + 2 more

  • Book Chapter
  • Cited by: 15
  • 10.5772/6125
Multi-objective Uniform-diversity Genetic Algorithm (MUGA)
  • Nov 1, 2008
  • Ali Jamali + 2 more

Optimization in engineering design has always been of great importance and interest, particularly in solving complex real-world design problems. Basically, the optimization process is defined as finding a set of values for a vector of design variables that leads to an optimum value of an objective or cost function. In such single-objective optimization problems, there may or may not exist constraint functions on the design variables, and the problems are accordingly referred to as constrained or unconstrained optimization problems. There are many calculus-based methods, including gradient approaches, that search for mostly local optimum solutions; these are well documented in (Arora, 1989; Rao, 1996). However, some basic difficulties of the gradient methods, such as their strong dependence on the initial guess, can cause them to find a local optimum rather than a global one. This has led to other heuristic optimization methods, particularly Genetic Algorithms (GAs), being used extensively during the last decade. Such nature-inspired evolutionary algorithms (Goldberg, 1989; Back et al., 1997) differ from traditional calculus-based techniques. The main difference is that GAs work with a population of candidate solutions, not a single point in the search space. This helps significantly to avoid being trapped in local optima (Renner & Ekart, 2003) as long as the diversity of the population is well preserved. In multi-objective optimization problems, there are several objective or cost functions (a vector of objectives) to be optimized (minimized or maximized) simultaneously. These objectives often conflict with each other, so that as one objective function improves, another deteriorates. Therefore, there is no single optimal solution that is best with respect to all the objective functions. 
Instead, there is a set of optimal solutions, well known as Pareto optimal solutions (Srinivas & Deb, 1994; Fonseca & Fleming, 1993; Coello Coello & Christiansen, 2000; Coello Coello & Van Veldhuizen, 2002), which distinguishes the inherent natures of single-objective and multi-objective optimization problems. V. Pareto (1848-1923) was the French-Italian economist who first developed the concept of multiobjective optimization in economics (Pareto, 1896). The concept of a Pareto front in the space of objective functions in multi-objective optimization problems (MOPs) stands for a set of solutions that are non-dominated with respect to each other but are superior to the rest of the solutions in the search space. Evidently, changing the vector of design variables in such a Pareto optimal
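
The dominance relation underlying the Pareto front described above is easy to state in code (a minimal sketch for minimization, with toy objective vectors):

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_front(points):
    """Keep the non-dominated points: those that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy bi-objective values: (3, 3) is dominated by (2, 2), and (2, 5) by (1, 4);
# the remaining three points are mutually non-dominated.
front = pareto_front([(1, 4), (2, 2), (4, 1), (3, 3), (2, 5)])
```

The quadratic all-pairs scan is fine for illustration; practical multi-objective GAs use faster non-dominated sorting, but the relation being computed is exactly this one.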

  • Research Article
  • 10.28924/ada/ma.5.9
Numerical Results for Gauss-Seidel Iterative Algorithm Based on Newton Methods for Unconstrained Optimization Problems
  • Apr 1, 2025
  • European Journal of Mathematical Analysis
  • Nguyen Dinh Dung

Optimization problems play a crucial role in various fields such as economics, engineering, and computer science. They involve finding the best value (maximum or minimum) of an objective function. In unconstrained optimization problems, the goal is to find a point where the function’s value reaches a maximum or minimum without being restricted by any conditions. Currently, there are many different methods to solve unconstrained optimization problems, one of which is the Newton method. This method is based on using a second-order Taylor series expansion to approximate the objective function. By calculating the first derivative (gradient) and second derivative (Hessian matrix) of the function, the Newton method determines the direction and step size to find the extrema. This method has a very fast convergence rate when near the solution and is particularly effective for problems with complex mathematical structures. In this paper, we introduce a Gauss-Seidel-type algorithm implemented for the Newton and Quasi-Newton methods, which is an efficient approach for finding solutions to optimization problems when the objective function is a convex functional. We also present some computational results for the algorithm to illustrate the convergence of the method.
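
The Newton iteration sketched above, x_{k+1} = x_k − H(x_k)⁻¹∇f(x_k), derived from the second-order Taylor model, looks like this in minimal form (illustrated on a hypothetical convex quadratic, where the Taylor model is exact and a single step suffices):

```python
import numpy as np

def newton_method(grad, hess, x0, iters=10):
    """Newton's method: solve H(x) d = -grad(x) and step x <- x + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - np.linalg.solve(hess(x), grad(x))
    return x

# f(x, y) = (x - 2)**2 + (x - 2*y)**2 is a convex quadratic minimized at (2, 1);
# its Hessian is constant, so one Newton step from any start lands on the optimum.
grad = lambda v: np.array([2.0 * (v[0] - 2.0) + 2.0 * (v[0] - 2.0 * v[1]),
                           -4.0 * (v[0] - 2.0 * v[1])])
hess = lambda v: np.array([[4.0, -4.0], [-4.0, 8.0]])
x_star = newton_method(grad, hess, [0.0, 0.0], iters=1)
```

On non-quadratic objectives the same loop exhibits the fast local (quadratic) convergence the paragraph mentions, at the cost of forming and solving with the Hessian at every step — the cost that quasi-Newton methods avoid.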

  • Conference Article
  • 10.1109/raai56146.2022.10093006
Assessment of a Consolidated Algorithm for Constrained Engineering Design Optimization and Unconstrained Function Optimization
  • Dec 9, 2022
  • Stephen Oladipo + 1 more

For real-life optimization problems, methods with adequate capability to explore the search space are crucial, especially bearing in mind the perpetual complexity of such problems. Consequently, presenting an effective algorithm to address these problems becomes imperative. The major objective of this work is to assess the application of a consolidated algorithm to constrained and unconstrained function optimization problems. Though the flower pollination algorithm (FPA) is commonly used, it has its limitations, including becoming stuck at local minima, premature convergence, and imbalance between intensification and diversification. As the FPA operates, the solution to the optimization problem relies on communication among pollen individuals. Consequently, instead of leading pollens randomly, the FPA's exploratory skills are boosted by employing the pathfinder algorithm's (PFA) components to route them to better locations and avoid local optima. For this reason, the PFA has been incorporated into the FPA to increase its performance. The efficacy of the proposed algorithm is tested on conventional mathematical optimization functions as well as two well-known constrained engineering design optimization problems. Experimental results show that the suggested algorithm outperformed its counterparts on both constrained and unconstrained optimization problems.

  • Research Article
  • Cited by: 44
  • 10.1016/j.cie.2020.106634
Necessary and sufficient optimality conditions for non-linear unconstrained and constrained optimization problem with interval valued objective function
  • Jul 5, 2020
  • Computers & Industrial Engineering
  • Md Sadikur Rahman + 2 more

  • Book Chapter
  • 10.5772/14969
Linear Evolutionary Algorithm
  • Apr 26, 2011
  • Kezong Tang + 3 more

During the past three decades, global optimization problems (including single-objective optimization problems (SOP) and multi-objective optimization problems (MOP)) have been intensively studied not only in Computer Science but also in Engineering. There are many solutions in the literature, such as the gradient projection method [1-3], Lagrangian and augmented Lagrangian penalty methods [4-6], and the aggregate constraint method [7-9]. Among these methods, the penalty function method is an important approach to solving global optimization problems. To obtain the optimal solution of the original problem, the first step is to convert the constrained optimization problem into an unconstrained optimization problem with a certain penalty function (such as a Lagrangian multiplier). As the penalty multiplier approaches zero or infinity, the iteration point might approach the optimum too. However, at the same time, the objective function of the unconstrained optimization problem might gradually become worse. This leads to increased computational complexity and long computational times when implementing the penalty function method to solve complex optimization problems. In most of the research, both the original constraints and the objective function are required to be smooth (or differentiable). However, in real-world problems, it is seldom possible to guarantee differentiability for a specific complex optimization problem. Hence, the development of efficient algorithms for handling complex optimization problems is of great importance. In this chapter, we present a new framework and algorithm belonging to the family of stochastic search algorithms, often referred to as evolutionary algorithms. Evolutionary algorithms (EAs) are stochastic optimization techniques based on natural evolution and the survival-of-the-fittest strategy found in biological organisms. 
Evolutionary algorithms have been successfully applied to solve complex optimization problems in business [10,11], engineering [12,13], and science [14,15]. Some commonly used EAs are Genetic Algorithms (GAs) [16], Evolutionary Programming (EP) [17], Evolution Strategies (ES) [18], and Differential Evolution (DE) [19]. Each of these methods has its own characteristics, strengths, and weaknesses. In general, an EA generates a set of initial solutions randomly based on the given seed and population size. Afterwards, it goes through evolution operations such as crossover and mutation before being evaluated by the
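
A minimal elitist evolutionary algorithm of the kind described — random initial population, Gaussian mutation, survival of the fittest — combined with the chapter's penalty idea for a constraint, might be sketched as follows (all data and parameters are hypothetical):

```python
import numpy as np

def evolve(f, dim=2, pop=20, gens=100, sigma=0.3, seed=0):
    """Elitist (mu + lambda) evolution strategy with Gaussian mutation:
    parents and mutated children compete, and the fittest `pop` survive."""
    rng = np.random.default_rng(seed)
    parents = rng.normal(0.0, 2.0, (pop, dim))       # random initial population
    for _ in range(gens):
        children = parents + rng.normal(0.0, sigma, parents.shape)  # mutation
        combined = np.vstack([parents, children])
        fitness = np.array([f(x) for x in combined])
        parents = combined[np.argsort(fitness)[:pop]]  # survival of the fittest
    return parents[0]

# Penalized objective: minimize sum((x - 1)^2) subject to x1 + x2 <= 1,
# enforced by a quadratic penalty; the constrained optimum is near (0.5, 0.5).
mu = 100.0
f = lambda x: np.sum((x - 1.0) ** 2) + mu * max(0.0, x[0] + x[1] - 1.0) ** 2
best = evolve(f)
```

Because the penalty needs no gradients, the same loop handles the nondifferentiable max(0, ·) term that would trouble the smooth methods discussed earlier in the paragraph.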

  • Research Article
  • Cited by: 34
  • 10.1080/02331930902928419
The modified subgradient algorithm based on feasible values
  • Jul 1, 2009
  • Optimization
  • Refail Kasimbeyli + 2 more

In this article, we continue to study the modified subgradient (MSG) algorithm previously suggested by Gasimov for solving sharp augmented Lagrangian dual problems. The most important features of this algorithm are that it guarantees a global optimum for a wide class of non-convex optimization problems, generates a strictly increasing sequence of dual values (a property not shared by other subgradient methods), and guarantees convergence. The main drawbacks of the MSG algorithm, which are typical of many subgradient algorithms, are that it uses an unconstrained global minimum of the augmented Lagrangian function and requires knowing an approximate upper bound of the initial problem to update the stepsize parameters. In this study, we introduce a new algorithm based on so-called feasible values and give convergence theorems. The new algorithm does not require knowing the optimal value initially; it seeks it iteratively, beginning with an arbitrary number. It is not necessary to find a global minimum of the augmented Lagrangian to update the stepsize parameters in the new algorithm. A collection of test problems is used to demonstrate the performance of the new algorithm.
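
The "seek the optimal value iteratively" idea can be illustrated with a classical Polyak-type subgradient step driven by an adaptive target level (a generic level-adjustment sketch, not the MSG/feasible-values algorithm of the article): the unknown optimal value is replaced by best-so-far minus a gap estimate, and the gap is halved whenever progress stalls.

```python
import numpy as np

def adaptive_level_subgradient(f, subgrad, x0, iters=500, delta=1.0):
    """Polyak-type step (f(x) - target)/||g||^2 toward an estimated target
    level; the gap estimate `delta` shrinks whenever a step fails to improve."""
    x = np.asarray(x0, dtype=float)
    best_f, best_x = f(x), x.copy()
    for _ in range(iters):
        g = subgrad(x)
        gg = g @ g
        if gg == 0.0:               # 0 in the subdifferential: x is optimal
            break
        target = best_f - delta     # current guess at the optimal value
        x = x - ((f(x) - target) / gg) * g
        fx = f(x)
        if fx < best_f:
            best_f, best_x = fx, x.copy()
        else:
            delta *= 0.5            # no improvement: the gap was overestimated
    return best_x, best_f

# Nonsmooth test function f(x) = |x1| + |x2|, optimal value 0 at the origin.
f = lambda x: abs(x[0]) + abs(x[1])
subgrad = lambda x: np.sign(x)
x_best, f_best = adaptive_level_subgradient(f, subgrad, [4.0, 3.0])
```

Starting the level at an arbitrary number and correcting it from observed function values is the same spirit as the feasible-values scheme: no a-priori bound on the optimal value is needed.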

  • Research Article
  • Cited by: 3
  • 10.21533/pen.v5i3.120
Performance comparisons of current metaheuristic algorithms on unconstrained optimization problems
  • Oct 18, 2017
  • Periodicals of Engineering and Natural Sciences (PEN)
  • Umit Can + 1 more

Nature-inspired metaheuristic algorithms have been recognized as powerful global optimization techniques in the last few decades. Many different metaheuristic optimization algorithms have been presented and successfully applied to different types of problems. In this paper, seven of the newest metaheuristic algorithms, namely Ant Lion Optimization, the Dragonfly Algorithm, Grey Wolf Optimization, Moth-Flame Optimization, the Multi-Verse Optimizer, the Sine Cosine Algorithm, and the Whale Optimization Algorithm, have been tested on unconstrained benchmark optimization problems and their performances have been reported. Some of these algorithms are swarm-based, while others are based on biology and mathematics. Performance analysis of these novel search and optimization algorithms under equal conditions on benchmark functions, carried out here for the first time, has given important information about their behavior on unimodal and multi-modal optimization problems. These algorithms have been proposed recently, and many new versions of them may be proposed in the future for efficient results in many different types of search and optimization problems.

More from: Optimization
  • New
  • Research Article
  • 10.1080/02331934.2025.2566113
A dynamical approach for bilevel equilibrium problems and its applications to control problems
  • Nov 5, 2025
  • Optimization
  • Kanchan Mittal + 3 more

  • New
  • Research Article
  • 10.1080/02331934.2025.2579724
Lagrange dualities for DC infinite optimization problems
  • Oct 31, 2025
  • Optimization
  • J F Bao + 4 more

  • New
  • Research Article
  • 10.1080/02331934.2025.2577805
New extremal principle for countable collection of sets in Asplund spaces
  • Oct 28, 2025
  • Optimization
  • Wei Ouyang + 3 more

  • New
  • Research Article
  • 10.1080/02331934.2025.2577808
Filled function method that avoids minimizing the objective function again
  • Oct 28, 2025
  • Optimization
  • Deqiang Qu + 3 more

  • New
  • Research Article
  • 10.1080/02331934.2025.2578403
Inertial primal-dual dynamics with Hessian-driven damping and Tikhonov regularization for convex-concave bilinear saddle point problems
  • Oct 28, 2025
  • Optimization
  • Xiangkai Sun + 2 more

  • Research Article
  • 10.1080/02331934.2025.2577807
The modified Levenberg-Marquardt method incorporating a new LM parameter and a nonmonotone scheme
  • Oct 24, 2025
  • Optimization
  • Jingyong Tang + 1 more

  • Research Article
  • 10.1080/02331934.2025.2574467
Optimality conditions and Lagrange dualities for a composite optimization problem involving nonconvex functions
  • Oct 24, 2025
  • Optimization
  • Lingli Hu + 1 more

  • Research Article
  • 10.1080/02331934.2025.2577413
Eckstein-Ferris-Pennanen-Robinson duality revisited: paramonotonicity, total Fenchel-Rockafellar duality, and the Chambolle-Pock operator
  • Oct 24, 2025
  • Optimization
  • Heinz H Bauschke + 2 more

  • Research Article
  • 10.1080/02331934.2025.2573666
Exact penalization at d-stationary points of cardinality- or rank-constrained problem
  • Oct 16, 2025
  • Optimization
  • Shotaro Yagishita + 1 more

  • Research Article
  • 10.1080/02331934.2025.2573668
Evolutionary differential variational-hemivariational inequalities: solvability and pullback attractor
  • Oct 16, 2025
  • Optimization
  • Xiuwen Li + 2 more
