Abstract
This paper presents a novel method for the solution of a particular class of structural optimization problems: the continuous stochastic gradient method (CSG). In the simplest case, we assume that the objective function is given as an integral of a desired property over a continuous parameter set. Applying a quadrature rule to approximate this integral can give rise to artificial and undesired local minima. The CSG method, however, does not rely on an approximation of the integral; instead, it utilizes gradient approximations from previous iterations in an optimal way. Although the CSG method requires no more than the solution of one state problem (of infinitely many) per optimization iteration, it can be proven in a mathematically rigorous way that both the function value and the full gradient of the objective are approximated with arbitrary precision in the course of the optimization process. Moreover, numerical experiments for a linear elastic problem with infinitely many load cases are described. For the chosen example, the CSG method proves to be clearly superior to the classic stochastic gradient (SG) and stochastic average gradient (SAG) methods.
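To make the iteration idea concrete, the following is a minimal, purely illustrative Python sketch of a CSG-style loop for a toy objective J(u) = ∫₀¹ (u − v)² dv. The toy problem, the nearest-neighbour weighting heuristic, the Monte Carlo estimation of the weights, the step size, and the projection are all assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative only): scalar design u in U_ad = [0, 2],
# continuous parameter v in V_ad = [0, 1], and
#   f(u, v) = (u - v)^2,  J(u) = int_0^1 f(u, v) dv  (minimised at u = 0.5).
U_LO, U_HI = 0.0, 2.0

def grad_f(u, v):
    # Gradient of f with respect to the design u (stands in for the
    # adjoint-based gradient obtained from a single state solve).
    return 2.0 * (u - v)

def csg_weights(u_hist, v_hist, u_now, n_mc=2000):
    # Assumed weighting heuristic: the weight of past sample i is a Monte
    # Carlo estimate of the measure of all v in V_ad for which (u_i, v_i)
    # is the closest previous sample point to (u_now, v).
    vs = rng.uniform(0.0, 1.0, n_mc)
    d = (u_now - u_hist[:, None])**2 + (vs[None, :] - v_hist[:, None])**2
    idx = np.argmin(d, axis=0)
    return np.bincount(idx, minlength=len(u_hist)) / n_mc

u, tau = 1.8, 0.2
u_hist, v_hist, g_hist = [], [], []

for n in range(200):
    v = rng.uniform(0.0, 1.0)          # draw one parameter sample
    g = grad_f(u, v)                   # one "state problem" solve per iteration
    u_hist.append(u); v_hist.append(v); g_hist.append(g)

    w = csg_weights(np.array(u_hist), np.array(v_hist), u)
    G = float(w @ np.array(g_hist))    # weighted combination of all past gradients
    u = np.clip(u - tau * G, U_LO, U_HI)   # projected gradient step

print(f"final design u = {u:.3f} (analytic minimiser: 0.5)")
```

Only one new gradient sample is computed per iteration; the weighted reuse of all previous samples is what distinguishes this sketch from a plain SG step.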
Highlights
We introduce the continuous stochastic gradient (CSG) method, which is applicable to the solution of a broad class of structural optimization problems
Preliminary experiments with well-known academic examples, as well as an application from mechanics in which an elastic structure is optimized with respect to infinitely many load cases, reveal that the CSG method outperforms both the traditional stochastic gradient (SG) method and the related stochastic average gradient (SAG) method, in the sense that a significantly lower function value is obtained within a fixed number of iterations
Summary
We define the set of Lebesgue integrable functions mapping from the space X to the space Y by L1(X; Y) and from the space X to the real numbers R by L1(X). Throughout this paper, we further assume that the evaluation of the function f for any (u, v) ∈ Uad × Vad requires the solution of an underlying state problem, i.e., f(u, v) = j(u, y(u; v)), where y(u; v) denotes the solution of the state problem parameterized by the design u and the additional continuous index variable v ∈ Vad. As a consequence of this construction, an evaluation of the objective J at a given design u theoretically requires the solution of infinitely many state problems, since J is an integral over Vad. This results in the optimization problem: minimize J(u) = ∫_{Vad} f(u, v) dv over all u ∈ Uad.
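For concreteness, the sketch below instantiates this structure with a hypothetical scalar "state problem" and shows how a fixed quadrature rule turns the integral objective into a finite weighted sum, so that every evaluation of J costs m state solves. All concrete choices (the state equation, the functional j, the Gauss–Legendre rule on Vad = [0, 1]) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical toy instance of the structure above: the "state problem" is the
# scalar equation a(v) * y = b(u), so y(u; v) = b(u) / a(v), and j(u, y) = 0.5 * y^2.
def solve_state(u, v):
    a, b = 1.0 + v**2, np.sin(u)
    return b / a                         # y(u; v)

def f(u, v):
    return 0.5 * solve_state(u, v)**2    # f(u, v) = j(u, y(u; v))

def J_quadrature(u, m=50):
    # Fixed quadrature rule on Vad = [0, 1]: each evaluation of J requires
    # m state solves, and this discrete sum replaces the exact integral.
    v_nodes, w = np.polynomial.legendre.leggauss(m)
    v_nodes = 0.5 * (v_nodes + 1.0)      # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    return sum(wi * f(u, vi) for wi, vi in zip(w, v_nodes))

print(J_quadrature(0.7))
```

Replacing the integral by such a fixed sum is exactly the discretization whose artificial local minima the CSG method is designed to avoid.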