- Research Article
- 10.46298/jnsao-2025-15577
- Aug 5, 2025
- Journal of Nonsmooth Analysis and Optimization
- Ensio Suonperä + 1 more
Bilevel optimisation is used in inverse imaging problems for hyperparameter learning/identification and experimental design, for instance, to find optimal regularisation parameters and forward operators. Computationally, however, the process is costly. To reduce this cost, so-called single-loop approaches have recently been introduced: on each step of an outer optimisation method, they take just a single gradient step towards the solution of the inner problem. In this paper, we flexibilise the inner algorithm to include standard methods in inverse imaging. Moreover, as we have recently shown, significant performance improvements can be obtained in PDE-constrained optimisation by interweaving the steps of conventional iterative linear system solvers with those of the optimisation method. We now demonstrate how the adjoint equation in bilevel problems can also benefit from such interweaving. We evaluate the performance of our approach on identifying the deconvolution kernel for image deblurring, and the subsampling operator for magnetic resonance imaging (MRI).
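The single-loop idea can be sketched on a scalar toy problem: learn a Tikhonov regularisation weight by taking, per outer step, one gradient step on the inner problem followed by one hypergradient step evaluated at the current inexact inner iterate. All data, names, and step sizes below are invented for the demo; this is not the paper's flexibilised algorithm.

```python
# Toy bilevel problem (all constants made up):
#   inner:  u*(t) = argmin_u 0.5*(u - d)**2 + 0.5*t*u**2   (Tikhonov fit)
#   outer:  min_t (u*(t) - u_true)**2                      (learn the weight t)
# Single-loop: one inner gradient step per outer step, instead of solving the
# inner problem to convergence each time.

def single_loop_bilevel(d=2.0, u_true=1.0, t=0.5, u=0.0,
                        tau_in=0.5, tau_out=0.05, iters=500):
    for _ in range(iters):
        u -= tau_in * ((u - d) + t * u)          # one inner gradient step
        # hypergradient via the implicit function theorem, evaluated at the
        # current inexact u:  du*/dt = -u / (1 + t)
        g = 2.0 * (u - u_true) * (-u / (1.0 + t))
        t = max(1e-8, t - tau_out * g)           # keep the weight positive
    return t, u

t, u = single_loop_bilevel()
```

For this toy the exact inner solution is u*(t) = d/(1+t), so the learned weight should approach t = 1, where u*(1) = u_true.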
- Research Article
- 10.46298/jnsao-2024-12604
- Dec 18, 2024
- Journal of Nonsmooth Analysis and Optimization
- Nicolas Borchard + 1 more
We address second-order optimality conditions for optimal control problems involving sparsity functionals that induce spatio-temporal sparsity patterns. We employ the notion of (weak) second subderivatives. With this approach, we are able to reproduce the results of Casas, Herzog, and Wachsmuth (ESAIM COCV, 23, 2017, pp. 263-295). Our analysis yields a slight improvement of one of these results and also opens the door to the sensitivity analysis of this class of problems.
- Research Article
- 10.46298/jnsao-2024-12366
- Jun 26, 2024
- Journal of Nonsmooth Analysis and Optimization
- Daniel Wachsmuth
In this paper, we consider optimization problems with $L^0$-cost of the controls. Here, we take the support of the control as an independent optimization variable. Topological derivatives of the corresponding value function with respect to variations of the support are derived. These topological derivatives are used in a novel gradient descent algorithm with Armijo line-search. Under suitable assumptions, the algorithm produces a minimizing sequence.
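The Armijo line-search component can be illustrated on an ordinary smooth problem (a textbook sketch only: in the paper the descent directions come from topological derivatives of the value function, not from a classical gradient; the objective below is a made-up example).

```python
# Gradient descent with Armijo backtracking line search (scalar case).

def armijo_gd(f, grad, x, sigma=1e-4, beta=0.5, iters=100):
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        # backtrack until the sufficient-decrease (Armijo) condition holds:
        #   f(x - t*g) <= f(x) - sigma * t * |g|^2
        while f(x - t * g) > f(x) - sigma * t * g * g:
            t *= beta
        x = x - t * g
    return x

# minimise f(x) = (x - 3)^2
x_star = armijo_gd(lambda x: (x - 3.0) ** 2, lambda x: 2.0 * (x - 3.0), x=0.0)
```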
- Research Article
- 10.46298/jnsao-2024-12235
- May 16, 2024
- Journal of Nonsmooth Analysis and Optimization
- Alberto De Marchi + 1 more
A broad class of optimization problems can be cast in composite form, that is, as the minimization of the composition of a lower semicontinuous function with a differentiable mapping. This paper investigates the versatile template of composite optimization without any convexity assumptions. First- and second-order optimality conditions are discussed. We highlight the difficulties that stem from the lack of convexity when dealing with necessary conditions in a Lagrangian framework and when considering error bounds. Building upon these characterizations, a local convergence analysis is delineated for a recently developed augmented Lagrangian method, deriving rates of convergence in the fully nonconvex setting.
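A minimal equality-constrained toy shows the basic augmented Lagrangian loop such methods build on (the example is convex, so it does not exercise the nonconvex analysis above; problem data and update rules are invented for illustration).

```python
# min x^2  subject to  x - 1 = 0;  KKT solution: x* = 1, multiplier y* = -2.
# Augmented Lagrangian: L(x, y, rho) = x^2 + y*(x - 1) + (rho/2)*(x - 1)^2.

def augmented_lagrangian(y=0.0, rho=1.0, outer=30):
    for _ in range(outer):
        x = (rho - y) / (2.0 + rho)   # exact minimiser of L(., y, rho)
        y += rho * (x - 1.0)          # first-order multiplier update
        rho *= 1.5                    # mild penalty increase
    return x, y

x, y = augmented_lagrangian()
```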
- Research Article
- 10.46298/jnsao-2024-10529
- Apr 29, 2024
- Journal of Nonsmooth Analysis and Optimization
- Lorena Bociu + 3 more
We revisit a class of integer optimal control problems for which a trust-region method has been proposed and analyzed in arXiv:2106.13453v3 [math.OC]. While the algorithm proposed in arXiv:2106.13453v3 [math.OC] successfully solves the class of optimization problems under consideration, its convergence analysis requires restrictive regularity assumptions. There are many examples of integer optimal control problems involving partial differential equations where these regularity assumptions are not satisfied. In this article, we provide a way to bypass the restrictive regularity assumptions by introducing an additional partial regularization of the control inputs by means of mollification and proving a $\Gamma$-convergence-type result when the support parameter of the mollification is driven to zero. We highlight the applicability of this theory in the case of fluid flows through deformable porous media equations that arise in biomechanics. We show that the regularity assumptions are violated in the case of poro-visco-elastic systems, and thus one needs to use the regularization of the control input introduced in this article. Associated numerical results show that while the homotopy (driving the mollification parameter to zero) can help to find better objective values and points of lower instationarity, the practical performance of the algorithm without the input regularization may be on par with the homotopy.
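The mollification step can be sketched discretely: convolve a bang-bang control with a compactly supported, normalised kernel, so that the smoothed control approaches the original as the support parameter shrinks. The grid, kernel, and control below are invented for illustration.

```python
import math

def mollify(u, eps, h=0.01):
    # discrete standard mollifier exp(-1/(1 - s^2)) on [-eps, eps],
    # normalised so that sum(phi) * h == 1
    r = round(eps / h)
    phi = [math.exp(-1.0 / (1.0 - (i / r) ** 2)) if abs(i) < r else 0.0
           for i in range(-r, r + 1)]
    s = sum(phi) * h
    phi = [p / s for p in phi]
    # convolution with zero padding outside the grid
    out = [0.0] * len(u)
    for i in range(len(u)):
        for j, p in enumerate(phi):
            k = i + j - r
            if 0 <= k < len(u):
                out[i] += h * p * u[k]
    return out

u = [0.0] * 50 + [1.0] * 50        # step (bang-bang) control on 100 cells
u_eps = mollify(u, eps=0.05)       # smooth ramp across the jump
```

Far from the jump the smoothed control agrees with the original; across the jump it interpolates smoothly.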
- Research Article
- 10.46298/jnsao-2023-7139
- Apr 9, 2024
- Journal of Nonsmooth Analysis and Optimization
- Mattias Fält + 1 more
In this paper, we extend the previous convergence results for the generalized alternating projection method applied to subspaces in [arXiv:1703.10547] to hold also for smooth manifolds. We show that the algorithm locally behaves similarly in the subspace and manifold settings and that the same rates are obtained. We also present convergence rate results for the case in which the algorithm is applied to non-empty, closed, and convex sets. The results are based on a finite identification property implying that, after an initial identification phase, the algorithm solves a smooth manifold feasibility problem. Therefore, the rates in this paper hold asymptotically for problems in which this identification property is satisfied. We present a few examples where this is the case, as well as a counterexample where it is not.
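The classical special case, alternating projections between two convex sets in the plane, is easy to sketch (the generalized method's relaxation parameters are omitted, and the two sets are made up for the demo).

```python
import math

def proj_line(p):
    # projection onto the x-axis
    return (p[0], 0.0)

def proj_disk(p, c=(2.0, 0.5), r=1.0):
    # projection onto a closed disk with centre c and radius r
    dx, dy = p[0] - c[0], p[1] - c[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return p
    return (c[0] + r * dx / d, c[1] + r * dy / d)

# The two sets meet in a segment of the x-axis; starting to the left of it,
# the iterates converge linearly to the segment's left endpoint.
p = (0.0, 0.0)
for _ in range(100):
    p = proj_line(proj_disk(p))
```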
- Research Article
- 10.46298/jnsao-2023-10433
- Sep 21, 2023
- Journal of Nonsmooth Analysis and Optimization
- Tuomo Valkonen
Point source localisation is generally modelled as a Lasso-type problem on measures. However, optimisation methods in non-Hilbert spaces, such as the space of Radon measures, are much less developed than in Hilbert spaces. Most numerical algorithms for point source localisation are based on the Frank-Wolfe conditional gradient method, for which ad hoc convergence theory is developed. We develop extensions of proximal-type methods to spaces of measures. This includes forward-backward splitting, its inertial version, and primal-dual proximal splitting. Their convergence proofs follow standard patterns. We demonstrate their numerical efficacy.
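In the familiar Euclidean setting, forward-backward splitting for a Lasso-type problem reads as below (a scalar sketch only: the paper's iterates live in a space of Radon measures, which this toy does not capture; all constants are invented).

```python
# min_x 0.5*(x - b)^2 + lam*|x|; the prox of lam*|x| is soft thresholding.

def soft_threshold(z, t):
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def forward_backward(b=2.0, lam=0.5, tau=0.9, iters=100):
    x = 0.0
    for _ in range(iters):
        grad = x - b                                    # forward (gradient) step
        x = soft_threshold(x - tau * grad, tau * lam)   # backward (prox) step
    return x

x = forward_backward()   # closed-form solution: soft_threshold(b, lam) = 1.5
```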
- Research Article
- 10.46298/jnsao-2023-10834
- Aug 11, 2023
- Journal of Nonsmooth Analysis and Optimization
- Livia Betz
Motivated by fatigue damage models, this paper addresses optimal control problems governed by a non-smooth system featuring two non-differentiable mappings. The system consists of a coupling between a doubly non-smooth history-dependent evolution and an elliptic PDE. After proving the directional differentiability of the associated solution mapping, an optimality system which is stronger than the one obtained by classical smoothening procedures is derived. If one of the non-differentiable mappings becomes smooth, the optimality conditions are of strong stationary type, i.e., equivalent to the primal necessary optimality condition.
- Research Article
- 10.46298/jnsao-2023-10164
- Jul 25, 2023
- Journal of Nonsmooth Analysis and Optimization
- Paul Manns + 4 more
Binary trust-region steepest descent (BTR) and combinatorial integral approximation (CIA) are two recently investigated approaches for the solution of optimization problems with distributed binary-/discrete-valued variables (control functions). We show improved convergence results for BTR by imposing a compactness assumption that is similar to the convergence theory of CIA. As a corollary we conclude that BTR also constitutes a descent algorithm on the continuous relaxation and its iterates converge weakly-$^*$ to stationary points of the latter. We provide computational results that validate our findings. In addition, we observe a regularizing effect of BTR, which we explore by means of a hybridization of CIA and BTR.
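The rounding step at the heart of CIA can be sketched by sum-up rounding for a single binary control: choose b on each cell so that the running integral of the relaxed control is tracked to within h/2. The grid and relaxed control below are made up for illustration; BTR itself is not shown.

```python
def sum_up_rounding(a, h):
    # a: relaxed control values in [0, 1] on a uniform grid of cell width h
    b, defect = [], 0.0                  # defect = integral of (a - b) so far
    for ai in a:
        if defect + ai * h >= h / 2:     # rounding up keeps the defect small
            b.append(1)
        else:
            b.append(0)
        defect += (ai - b[-1]) * h
    return b

h = 0.1
a = [0.3, 0.7, 0.5, 0.9, 0.2, 0.6, 0.4, 0.8, 0.1, 0.5]
b = sum_up_rounding(a, h)
```

By induction, every partial sum of (a - b) * h stays within h/2, which is the approximation property CIA-type convergence arguments rely on.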
- Research Article
- 10.46298/jnsao-2023-10290
- Jun 2, 2023
- Journal of Nonsmooth Analysis and Optimization
- Alberto De Marchi
We address composite optimization problems, which consist in minimizing the sum of a smooth and a merely lower semicontinuous function, without any convexity assumptions. Numerical solutions of these problems can be obtained by proximal gradient methods, which often rely on a line search procedure as globalization mechanism. We consider an adaptive nonmonotone proximal gradient scheme based on an averaged merit function and establish asymptotic convergence guarantees under weak assumptions, delivering results on par with the monotone strategy. Global worst-case rates for the iterates and a stationarity measure are also derived. Finally, a numerical example indicates the potential of nonmonotonicity and spectral approximations.
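The nonmonotone acceptance test against an averaged merit value can be sketched on a scalar Lasso-type problem (Zhang-Hager-style averaging as one concrete choice; the paper's adaptive and spectral stepsize rules are replaced by plain backtracking, and all constants are invented).

```python
# min 0.5*(x - b)^2 + lam*|x|, solved by proximal gradient steps whose
# acceptance is tested against an averaged merit value rather than f(x_k).

def soft_threshold(z, t):
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def nonmonotone_pg(b=3.0, lam=1.0, eta=0.85, iters=60):
    f = lambda x: 0.5 * (x - b) ** 2 + lam * abs(x)
    x, tau = 0.0, 1.5
    merit, q = f(x), 1.0                  # averaged merit C_k and its weight Q_k
    for _ in range(iters):
        while True:
            x_new = soft_threshold(x - tau * (x - b), tau * lam)
            # nonmonotone acceptance: sufficient decrease w.r.t. the merit
            if f(x_new) <= merit - 1e-4 * (x_new - x) ** 2 / tau:
                break
            tau *= 0.5                    # backtrack on rejection
        x = x_new
        q_new = eta * q + 1.0
        merit = (eta * q * merit + f(x)) / q_new   # averaged merit update
        q = q_new
    return x

x = nonmonotone_pg()   # closed-form minimiser: soft_threshold(b, lam) = 2.0
```

Because the merit averages past function values, occasional increases of f along the iterates would still be accepted, which is the point of the nonmonotone strategy.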