Instances of Nonconvex Optimization
This chapter presents three examples of nonconvex optimization programs that can be solved (almost) exactly. The first example concerns quadratically constrained quadratic programs, whose treatment relies on the so-called S-lemma. The second example is dynamic programming, which is utilized to compute best approximants by sparse and disjointed vectors. The third example consists of projected gradient descent algorithms, including iterative hard thresholding algorithms.
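Since iterative hard thresholding recurs as the chapter's third example, a minimal sketch may help fix ideas before the literature excerpts below. This is our illustrative implementation for sparse recovery from linear measurements, not the chapter's own formulation; the step size `mu`, the sparsity level `s`, and the toy data are all assumptions.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, y, s, mu=1.0, n_iter=200):
    """Iterative hard thresholding for min ||y - Ax||_2 s.t. ||x||_0 <= s.

    Each iteration takes a gradient step on the least-squares loss, then
    projects onto the (nonconvex) set of s-sparse vectors.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), s)
    return x

# Toy usage: recover a 5-sparse vector from 80 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, s=5)
print(np.linalg.norm(x_hat - x_true))
```

Each iteration combines a convex ingredient (the gradient step) with a nonconvex projection onto the sparse set, which is precisely the projected-gradient-descent pattern the chapter describes.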
- Research Article
- DOI: 10.1007/s10898-017-0542-9
- Jul 1, 2017
- Journal of Global Optimization
Primal and dual strong duality (that is, min-sup and inf-max duality) in nonconvex optimization are revisited in view of recent literature on the subject, establishing, in particular, new characterizations for the dual case. This gives rise to a new class of quasiconvex problems exhibiting a zero duality gap or closedness of the images of the vector mappings associated with those problems. Such conditions are described for the classes of linear fractional and quadratic functions. In addition, applications to nonconvex quadratic optimization problems under a single inequality or equality constraint are presented, providing new results on the fulfillment of a zero duality gap or dual strong duality.
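To make the setting of the last sentence concrete (and to connect it to the S-lemma mentioned in the chapter summary above), here is the primal-dual pair for a quadratic problem under a single quadratic inequality constraint, in our notation rather than the paper's:

$$
p^\star=\min_{x\in\mathbb{R}^n}\ x^\top A x+2a^\top x
\quad\text{s.t.}\quad x^\top B x+2b^\top x+\beta\le 0,
$$

$$
d^\star=\sup_{\lambda\ge 0}\ \inf_{x\in\mathbb{R}^n}\ x^\top(A+\lambda B)\,x+2(a+\lambda b)^\top x+\lambda\beta .
$$

Although the primal may be nonconvex (A and B indefinite), the S-lemma implies $p^\star=d^\star$ whenever a strictly feasible point exists; this is the prototype of the zero-duality-gap results discussed here.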
- Research Article
- DOI: 10.1016/j.apm.2020.03.004
- Mar 10, 2020
- Applied Mathematical Modelling
Trajectory planning based on non-convex global optimization for serial manipulators
- Research Article
- DOI: 10.1007/s10957-023-02285-2
- Sep 1, 2023
- Journal of Optimization Theory and Applications
This paper proposes a general framework for solving multiobjective nonconvex optimization problems, i.e., optimization problems in which multiple objective functions have to be optimized simultaneously. The nonconvexity may come from the objective or constraint functions, or from integrality conditions on some of the variables. In particular, multiobjective mixed-integer convex and nonconvex optimization problems are covered and form the motivation of our studies. The presented algorithm is based on a branch-and-bound method in the pre-image space, a technique that has already been applied successfully to continuous nonconvex multiobjective optimization. However, extending this method to the mixed-integer setting is not straightforward, in particular with regard to convergence results. More precisely, new branching rules and lower-bounding procedures are needed to obtain an algorithm that is practically applicable and convergent for multiobjective mixed-integer optimization problems. The corresponding results are a main contribution of this paper. Moreover, to improve the performance of this new branch-and-bound method, we enhance it with two types of cuts in the image space, based on ideas from multiobjective mixed-integer convex optimization. These combine continuous convex relaxations with adaptive cuts for the convex hull of the mixed-integer image set, derived from supporting hyperplanes to the relaxed sets. Based on the above ingredients, the paper provides a new multiobjective mixed-integer solver for convex problems with a stopping criterion purely in the image space. Furthermore, a solver for multiobjective mixed-integer nonconvex optimization is presented for the first time. We provide the results of numerical tests for the new algorithm and, where possible, compare it with existing procedures.
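The branch-and-bound mechanics sketched in this abstract are easiest to see in a drastically simplified, single-objective, continuous form. The following is our own toy illustration, not the paper's multiobjective mixed-integer algorithm: it minimizes a one-dimensional nonconvex function over an interval, with lower bounds obtained from an assumed Lipschitz constant `L`.

```python
import heapq
import math

def branch_and_bound(f, a, b, L, tol=1e-6):
    """Minimize f on [a, b], given a Lipschitz constant L for f.

    Lower bound on a box [lo, hi]: f(mid) - L * (hi - lo) / 2.
    Boxes whose lower bound exceeds the incumbent are pruned.
    """
    mid = 0.5 * (a + b)
    best_x, best_f = mid, f(mid)
    heap = [(best_f - L * (b - a) / 2, a, b)]        # (lower bound, lo, hi)
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_f - tol:                        # prune: cannot improve
            continue
        for lo2, hi2 in ((lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)):
            m = 0.5 * (lo2 + hi2)
            fm = f(m)
            if fm < best_f:                          # update the incumbent
                best_x, best_f = m, fm
            child_lb = fm - L * (hi2 - lo2) / 2
            if child_lb < best_f - tol:              # branch only if promising
                heapq.heappush(heap, (child_lb, lo2, hi2))
    return best_x, best_f

# Toy usage: a nonconvex function with several local minima on [-5, 5];
# |f'(t)| <= 3 + 0.2*5 <= 4, so L = 4 is a valid Lipschitz constant.
x, fx = branch_and_bound(lambda t: math.sin(3 * t) + 0.1 * t * t, -5, 5, L=4.0)
print(x, fx)
```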
- Research Article
- DOI: 10.1016/j.jfranklin.2021.02.029
- Feb 26, 2021
- Journal of the Franklin Institute
A novel neural network to nonlinear complex-variable constrained nonconvex optimization
- Conference Article
- DOI: 10.24963/ijcai.2020/201
- Jul 1, 2020
- International Joint Conference on Artificial Intelligence (IJCAI)
Various types of parameter restart schemes have been proposed for proximal gradient algorithms with momentum to facilitate their convergence in convex optimization. However, under parameter restart, the convergence of proximal gradient algorithms with momentum remains obscure in nonconvex optimization. In this paper, we propose a novel proximal gradient algorithm with momentum and parameter restart for solving nonconvex and nonsmooth problems. Our algorithm is designed to 1) allow flexible parameter restart schemes that cover many existing ones; 2) achieve a global sub-linear convergence rate in nonconvex and nonsmooth optimization; and 3) guarantee convergence to a critical point, with various types of asymptotic convergence rates depending on the parameterization of the local geometry in nonconvex and nonsmooth optimization. Numerical experiments demonstrate the convergence and effectiveness of our proposed algorithm.
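The interplay of momentum and restart is easy to illustrate. Below is a generic FISTA-style proximal gradient method with a function-value restart rule, written as our own sketch rather than the paper's algorithm; the l1 proximal operator (soft thresholding), the convex LASSO test problem, and all parameters are assumptions made for brevity.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_momentum_restart(grad_f, obj, prox, x0, step, n_iter=500):
    """Proximal gradient with Nesterov-type momentum and a function-value
    restart rule: whenever the objective increases, momentum is discarded.
    This is one simple member of the family of restart schemes."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox(y - step * grad_f(y), step)
        if obj(x) > obj(x_prev):                 # restart: reset momentum
            y, t = x_prev.copy(), 1.0
            continue
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev

# Toy usage on a (convex) LASSO instance; the paper targets nonconvex f.
rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((50, 100)), rng.standard_normal(50), 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant
x = prox_grad_momentum_restart(
    grad_f=lambda z: A.T @ (A @ z - b),
    obj=lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2 + lam * np.abs(z).sum(),
    prox=lambda v, s: soft_threshold(v, lam * s),
    x0=np.zeros(100), step=step)
print(f"{(np.abs(x) > 1e-8).sum()} nonzero coordinates")
```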
- Conference Article
- DOI: 10.1109/cdc.2015.7402845
- Dec 1, 2015
- IEEE Conference on Decision and Control (CDC)
Convex scenario optimization is a well-recognized approach to data-based optimization in which the solution comes accompanied by precise generalization guarantees. It has been used in system identification as a driving methodology to construct interval prediction models. With this paper, scenario optimization breaks into the realm of non-convex optimization. In non-convex optimization, the number of scenarios that determine the solution (the so-called support scenarios) cannot be bounded beforehand, and one has to wait until the solution is computed to evaluate the size of the support scenario set. A theory is developed in this paper by which the generalization property of the solution is evaluated a posteriori, based on the registered number of support scenarios. This new perspective empowers the method and opens up important new possibilities for applying it to system identification problems involving non-convex optimization.
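The a-posteriori step can be made concrete with a brute-force check: after solving a scenario program, a scenario is a support scenario if its removal changes the solution. The sketch below is our generic illustration on an invented one-dimensional non-convex program (the grid solver, the objective, and the scenario model are all assumptions); the bound that converts the support count into a generalization level is the paper's contribution and is omitted here.

```python
import numpy as np

def solve_scenario_program(scenarios):
    """Toy non-convex scenario program: minimize a non-convex objective
    over x in [-3, 3] subject to x >= delta for every scenario delta.
    A brute-force grid search stands in for a non-convex solver."""
    grid = np.linspace(-3.0, 3.0, 60001)
    feasible = grid[grid >= max(scenarios)]
    objective = np.sin(3.0 * feasible) + 0.3 * feasible   # non-convex
    return float(feasible[np.argmin(objective)])

def support_scenarios(scenarios, tol=1e-9):
    """A scenario is a support scenario if removing it changes the solution;
    the count is only known after the solution has been computed."""
    x_star = solve_scenario_program(scenarios)
    support = [i for i in range(len(scenarios))
               if abs(solve_scenario_program(scenarios[:i] + scenarios[i+1:])
                      - x_star) > tol]
    return x_star, support

rng = np.random.default_rng(2)
deltas = list(rng.uniform(-2.0, 2.0, size=20))
x_star, support = support_scenarios(deltas)
print(f"solution {x_star:.3f} with {len(support)} support scenario(s)")
```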
- Research Article
- DOI: 10.3390/s21072459
- Apr 2, 2021
- Sensors (Basel, Switzerland)
Phase reconstruction is in general a non-trivial problem for devices where the reference is not accessible. A non-convex iterative optimization algorithm is proposed in this paper to reconstruct the phase in reference-less spherical multiprobe measurement systems based on a rotating arch of probes. The algorithm builds on the reconstruction of the phases of self-transmitting devices in multiprobe systems by taking advantage of the on-axis top probe of the arch. One limitation of the top-probe solution is that the relative phase between probes is lost when the measurement arch rotates. This paper addresses this problem with an iterative optimization algorithm that uses partial knowledge of the relative phase between probes. The iterative algorithm is based on linear combinations of signals where the relative phase is known. Phase substitution and modal filtering are implemented to avoid local minima and make the algorithm converge. Several noise-free examples are presented, and the results of the iterative algorithm are analyzed. The number of linear combinations used is far below the square of the number of degrees of freedom of the non-linear problem, which is compensated by a proper initial guess. With respect to noisy measurements, the top-probe method introduces uncertainties at different azimuth and elevation positions of the arch. This is modelled by considering the realistic noise model of a low-cost receiver, and the results demonstrate the good accuracy of the method. Numerical results on antenna measurements are also presented. Due to the numerical complexity of the algorithm, it is limited to electrically small or medium-size problems.
- Research Article
- DOI: 10.1007/s10107-002-0310-5
- Dec 1, 2002
- Mathematical Programming
Recently, interior-point algorithms have been applied to nonlinear and nonconvex optimization. Most of these algorithms are either primal-dual path-following or affine-scaling in nature, and some of them are conjectured to converge to a local minimum. We give several examples to show that this may be untrue and we suggest some strategies for overcoming this difficulty.
- Research Article
- DOI: 10.1016/j.ejor.2006.09.097
- Sep 1, 2008
- European Journal of Operational Research
Nonconvex optimization using negative curvature within a modified linesearch
- Conference Article
- DOI: 10.1063/1.5089982
- Jan 1, 2019
- AIP Conference Proceedings
The paper addresses the nonconvex nonsmooth optimization problem with the cost function and the equality and inequality constraints given by d.c. functions. The original problem is reduced to an unconstrained problem with the help of exact penalization theory. The penalized problem is then represented as an unconstrained d.c. minimization problem, for which new mathematical tools in the form of global optimality conditions (GOCs) are developed. The GOCs reduce the nonconvex problem in question to a family of convex problems, linearized with respect to the basic nonconvexities. On the basis of the proposed theory, we develop numerical methods of local and global search for the problem in question.
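The reduction described in the first two sentences can be written compactly; the notation below is ours. For a problem with d.c. data $f_j = g_j - h_j$ ($g_j, h_j$ convex),

$$
\min_x\ f_0(x)\quad\text{s.t.}\quad f_i(x)\le 0,\ i=1,\dots,m,
$$

exact penalization replaces the constraints, for a sufficiently large penalty parameter $\sigma>0$ and under standard regularity assumptions, by the unconstrained problem

$$
\min_x\ F_\sigma(x):=f_0(x)+\sigma\,\max\{0,\,f_1(x),\dots,f_m(x)\},
$$

and $F_\sigma$ is again d.c., since sums and pointwise maxima of d.c. functions are d.c.; equality constraints are handled analogously through their absolute values.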
- Research Article
- DOI: 10.1109/tpami.2019.2933841
- Aug 8, 2019
- IEEE Transactions on Pattern Analysis and Machine Intelligence
First-order non-convex Riemannian optimization algorithms have gained recent popularity in structured machine learning problems including principal component analysis and low-rank matrix completion. The current paper presents an efficient Riemannian Stochastic Path Integrated Differential EstimatoR (R-SPIDER) algorithm to solve the finite-sum and online Riemannian non-convex minimization problems. At the core of R-SPIDER is a recursive semi-stochastic gradient estimator that can accurately estimate Riemannian gradient under not only exponential mapping and parallel transport, but also general retraction and vector transport operations. Compared with prior Riemannian algorithms, such a recursive gradient estimation mechanism endows R-SPIDER with lower computational cost in first-order oracle complexity. Specifically, for finite-sum problems with n components, R-SPIDER is proved to converge to an ϵ-approximate stationary point within [Formula: see text] stochastic gradient evaluations, beating the best-known complexity [Formula: see text]; for online optimization, R-SPIDER is shown to converge with [Formula: see text] complexity which is, to the best of our knowledge, the first non-asymptotic result for online Riemannian optimization. For the special case of gradient dominated functions, we further develop a variant of R-SPIDER with improved linear rate of convergence. Extensive experimental results demonstrate the advantage of the proposed algorithms over the state-of-the-art Riemannian non-convex optimization methods.
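The recursive semi-stochastic estimator at the heart of SPIDER-type methods takes one line; in our notation (not copied from the paper), with $\mathcal{S}_t$ a sampled mini-batch, $\mathcal{T}_{x_{t-1}}^{x_t}$ a vector transport from $x_{t-1}$ to $x_t$, and $R$ a retraction:

$$
v_t=\operatorname{grad} f_{\mathcal{S}_t}(x_t)
-\mathcal{T}_{x_{t-1}}^{x_t}\!\left(\operatorname{grad} f_{\mathcal{S}_t}(x_{t-1})-v_{t-1}\right),
\qquad
x_{t+1}=R_{x_t}(-\eta\, v_t),
$$

with a periodic full-gradient refresh $v_t=\operatorname{grad} f(x_t)$ to keep the accumulated estimation error controlled.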
- Conference Article
- DOI: 10.1109/camsap.2015.7383735
- Dec 1, 2015
- IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
We propose a variant of the classical conditional gradient method (CGM) for sparse inverse problems with differentiable measurement models. Such models arise in many practical problems including superresolution, time-series modeling, and matrix completion. Our algorithm combines nonconvex and convex optimization techniques: we propose global conditional gradient steps alternating with nonconvex local search exploiting the differentiable measurement model. This hybridization gives the theoretical global optimality guarantees and stopping conditions of convex optimization along with the performance and modeling flexibility associated with nonconvex optimization. Our experiments demonstrate that our technique achieves state-of-the-art results in several applications.
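The hybridization can be sketched in a few lines. Below is our toy rendition of the alternating pattern described here, for a one-dimensional source-localization problem with a Gaussian measurement model: a global conditional-gradient step selects a new atom on a fine candidate grid, and a nonconvex local search jointly refines all weights and positions. The atom shape, the use of scipy's L-BFGS-B for the local search, and all parameters are assumptions, not the paper's choices.

```python
import numpy as np
from scipy.optimize import minimize

def atom(theta, grid):
    """Differentiable measurement model: a Gaussian bump centered at theta."""
    return np.exp(-0.5 * ((grid - theta) / 0.05) ** 2)

def residual(c, thetas, y, grid):
    model = np.zeros_like(y)
    for ci, ti in zip(c, thetas):
        model = model + ci * atom(ti, grid)
    return model - y

def hybrid_cgm(y, grid, n_atoms, candidates=np.linspace(0.0, 1.0, 2001)):
    """Global conditional-gradient steps (atom selection on a fine grid)
    alternating with nonconvex local search (joint refinement of all
    weights and positions through the differentiable model)."""
    c, thetas = [], []
    for _ in range(n_atoms):
        r = residual(c, thetas, y, grid)
        scores = [abs(atom(t, grid) @ r) for t in candidates]  # global step
        thetas.append(float(candidates[int(np.argmax(scores))]))
        c.append(0.0)
        k = len(c)
        def obj(p):                                            # local step
            return 0.5 * np.sum(residual(p[:k], p[k:], y, grid) ** 2)
        p = minimize(obj, np.concatenate([c, thetas]), method="L-BFGS-B").x
        c, thetas = list(p[:k]), list(p[k:])
    return np.array(c), np.array(thetas)

# Toy usage: two blurred spikes; positions enter the model nonlinearly.
grid = np.linspace(0.0, 1.0, 200)
y = 1.0 * atom(0.30, grid) - 0.7 * atom(0.62, grid)
c, thetas = hybrid_cgm(y, grid, n_atoms=2)
print(np.round(c, 2), np.round(thetas, 2))
```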
- Research Article
- DOI: 10.1080/02331934.2025.2569782
- Oct 7, 2025
- Optimization
Dynamical systems have inspired and explained several accelerated algorithms for a wide range of optimization problems. However, because the objective functions in many real-world applications lack smoothness and convexity, these accelerated algorithms cannot be applied directly. This paper proposes a smoothing approximation approach to address non-smooth, non-convex machine learning optimization problems. Our work is motivated by the following goal: developing a direct method for finding critical points of objective functions of machine learning problems that are known to be non-smooth and non-convex. To achieve this goal, we establish the convergence of an alternative algorithm for smooth functions without convexity, supplementing some recent results of Attouch et al.
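A standard instance of the smoothing idea replaces $|u|$ with the differentiable surrogate $\sqrt{u^2+\mu^2}$, whose gradient is Lipschitz with constant of order $1/\mu$, and then runs a smooth method. The sketch below is our illustration (on a convex l1-regularized test problem for simplicity, though the approach targets non-convex objectives); the particular smoothing function and step size are assumptions, not necessarily the paper's.

```python
import numpy as np

def smooth_abs(u, mu):
    """Smooth approximation of |u|: sqrt(u^2 + mu^2), off by at most mu."""
    return np.sqrt(u * u + mu * mu)

def grad_smooth_abs(u, mu):
    return u / np.sqrt(u * u + mu * mu)

def smoothed_descent(A, b, lam, mu=1e-2, n_iter=2000):
    """Gradient descent on the smoothed objective
    F_mu(x) = 0.5 * ||Ax - b||^2 + lam * sum_i smooth_abs(x_i, mu)."""
    # Crude Lipschitz bound: ||A||^2 from the quadratic, lam/mu from smoothing.
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam / mu)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * (A.T @ (A @ x - b) + lam * grad_smooth_abs(x, mu))
    return x

rng = np.random.default_rng(3)
A, b = rng.standard_normal((40, 80)), rng.standard_normal(40)
x = smoothed_descent(A, b, lam=0.5)
print(f"near-zero coordinates: {(np.abs(x) < 1e-2).sum()} of {x.size}")
```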
- Research Article
- DOI: 10.1002/cta.525
- Aug 4, 2008
- International Journal of Circuit Theory and Applications
We present a framework for synthesizing low-power analog circuits through global optimization over generally nonconvex multivariate polynomial objective functions and constraints. Specifically, a nonconvex optimization problem is formed, which is then efficiently solved through convex programming techniques based on linear matrix inequality (LMI) relaxation. The framework allows both polynomial inequality and equality constraints, thereby facilitating more accurate device modeling and parameter tuning. Compared to traditional nonlinear programming (NLP), the proposed methodology exhibits superior computational efficiency and guarantees convergence to a globally optimal solution. As in other physical design tasks, circuit knowledge and insight are critical for the initial problem formulation, while the nonconvex optimization machinery provides a versatile tool and a systematic way to locate the optimal parameters meeting design specifications. Two circuit design examples are given, namely a nested transconductance(Gm)-capacitance compensation (NGCC) amplifier and a delta-sigma (ΔΣ) analog-to-digital converter (ADC), both of which are key components in many electronic systems.
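The LMI-relaxation step can be shown on a tiny instance. Below is our toy example, not a circuit problem: the global minimum of the nonconvex univariate quartic p(x) = x^4 - 3x^2 + x is obtained by replacing monomials with moment variables constrained by a positive semidefinite (Hankel) moment matrix. The cvxpy modeling package is an assumed dependency; for univariate polynomials this first relaxation is known to be tight.

```python
import cvxpy as cp

# Moment (LMI) relaxation of min_x p(x), p(x) = x^4 - 3x^2 + x.
# Entries of the Hankel moment matrix M stand in for the monomials
# 1, x, x^2, x^3, x^4, and M must be positive semidefinite.
M = cp.Variable((3, 3), symmetric=True)
constraints = [
    M >> 0,               # the LMI
    M[0, 0] == 1,         # zeroth moment equals 1
    M[0, 2] == M[1, 1],   # both entries represent E[x^2] (Hankel structure)
]
objective = M[2, 2] - 3 * M[1, 1] + M[0, 1]   # E[x^4] - 3 E[x^2] + E[x]
prob = cp.Problem(cp.Minimize(objective), constraints)
prob.solve()
print(f"certified lower bound on p: {prob.value:.4f}")
```

The relaxation turns a nonconvex polynomial problem into a convex semidefinite program, which is the same mechanism the paper applies to polynomial device models and design constraints.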
- Conference Article
- DOI: 10.1109/wispnet.2016.7566224
- Mar 1, 2016
- International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET)
Distance-based network localization determines the positions of the nodes in a network subject to distance constraints. The network localization problem may be modeled as a non-convex nonlinear optimization problem with distance constraints that are either convex or non-convex. Existing network localization algorithms either eliminate the non-convex distance constraints or relax them into convex constraints so that traditional convex optimization methods, e.g., SDP, can be employed to estimate node positions from noisy distances. In practice, the estimated solution of such a converted problem incurs errors due to the modification of the constraints. In this paper, we employ the nonlinear Lagrangian method for non-convex optimization, which efficiently estimates node positions by solving the original network localization problem without any modification. The proposed method involves numerical computations; by increasing the number of iterations (not very many, usually fewer than a hundred), a desired level of accuracy may be achieved.
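By way of contrast with relaxation-based approaches, the original (unmodified) localization problem can be attacked directly as a nonconvex least-squares fit. The sketch below is our illustration of that baseline, not the paper's nonlinear Lagrangian method (which keeps the distance constraints explicit); the anchor layout, the noise level, and the use of scipy's least_squares solver are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy anchored network localization: recover 4 unknown node positions from
# noisy node-to-anchor distances by minimizing the original nonconvex
# residuals directly, without relaxing the distance constraints.
rng = np.random.default_rng(4)
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_pos = rng.uniform(0.2, 0.8, size=(4, 2))
noise = 0.01 * rng.standard_normal((4, len(anchors)))
dist = np.linalg.norm(true_pos[:, None, :] - anchors[None, :, :], axis=2) + noise

def residuals(flat_pos):
    pos = flat_pos.reshape(-1, 2)
    est = np.linalg.norm(pos[:, None, :] - anchors[None, :, :], axis=2)
    return (est - dist).ravel()

x0 = 0.5 * np.ones(true_pos.size)        # crude initial guess at the center
sol = least_squares(residuals, x0)       # nonconvex Gauss-Newton-type solve
print(np.abs(sol.x.reshape(-1, 2) - true_pos).max())
```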