Abstract

Riemannian Optimization (RO) generalizes standard optimization methods from Euclidean spaces to Riemannian manifolds. Multidisciplinary Design Optimization (MDO) problems exist on Riemannian manifolds, and with the differential geometry framework which we have previously developed, we can now apply RO techniques to MDO. Here, we provide background theory and a literature review for RO and give the necessary formulae to implement the Steepest Descent Method (SDM), Newton’s Method (NM), and the Conjugate Gradient Method (CGM), in Riemannian form, on MDO problems. We then compare the performance of the Riemannian and Euclidean SDM, NM, and CGM algorithms on several test problems (including a satellite design problem from the MDO literature); we use a calculated step size, line search, and geodesic search in our comparisons. With the framework’s induced metric, the RO algorithms are generally not as effective as their Euclidean counterparts, and line search is consistently better than geodesic search. In our post-experimental analysis, we also show how the optimization trajectories for the Riemannian SDM and CGM relate to design coupling and thereby provide some explanation for the observed optimization behaviour. This work is only a first step in applying RO to MDO, however, and the use of quasi-Newton methods and different metrics should be explored in future research.

Highlights

  • There currently exist a variety of gradient-based optimization algorithms.

  • The test problem does not have multiple disciplines, but it does have design variables and a state variable defined by a state equation (MDO problems have design variables and multiple state variables defined by state equations); its state equation can be solved in residual form with a root-finding algorithm; and the necessary Riemannian optimization quantities can be calculated using our Multidisciplinary Design Optimization (MDO) formulae.

  • The optimization results for a calculated step size, line search, and geodesic search are shown in Tables 2, 3, and 4, respectively
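The second highlight notes that the test problem's state variable is defined implicitly by a state equation that can be solved in residual form with a root-finding algorithm. A minimal sketch of that step, using an invented scalar residual R(x, y) = y³ + y − x (not the paper's actual state equation) and a basic Newton root-finding iteration:

```python
# Hypothetical single-discipline sketch: design variable x, state variable y
# defined implicitly by a state equation in residual form, R(x, y) = 0.
# The residual R(x, y) = y**3 + y - x is invented for illustration only.

def residual(x, y):
    return y**3 + y - x

def residual_dy(x, y):
    # Partial derivative of the residual with respect to the state y
    return 3.0 * y**2 + 1.0

def solve_state(x, y0=0.0, tol=1e-12, max_iter=50):
    """Solve R(x, y) = 0 for the state y via Newton's root-finding iteration."""
    y = y0
    for _ in range(max_iter):
        r = residual(x, y)
        if abs(r) < tol:
            break
        y -= r / residual_dy(x, y)
    return y

# For x = 2, the state satisfies y**3 + y = 2, i.e. y = 1.
y = solve_state(2.0)
```

In a full MDO problem this scalar solve would be replaced by a coupled multidisciplinary analysis, but the residual-plus-root-finding structure is the same.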


Summary

Introduction

There currently exist a variety of gradient-based optimization algorithms. These algorithms are well understood and widely used, but most have been derived for flat spaces. It is possible to generalize these algorithms to curved spaces: Riemannian Optimization (RO) methods are gradient-based optimization algorithms derived for Riemannian manifolds. The literature only shows use of the traditional ‘‘flat’’ algorithms on MDO problems, but given that the spaces underlying these problems are Riemannian manifolds, it makes sense to consider how RO algorithms might perform in an MDO context. We will consider some MDO problems and compare the Riemannian algorithms’ performance against that of the Euclidean algorithms. Given a Riemannian manifold M, the metric tensor g_ij defines an inner product, and this makes it possible to perform a number of different mathematical operations on the manifold. Note the use of index notation with the summation convention here and in the rest of this paper unless otherwise indicated.
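The role of the metric tensor can be sketched concretely. In local coordinates, g_ij defines the inner product ⟨u, v⟩ = g_ij uⁱ vʲ, and the Riemannian gradient is the Euclidean gradient with its index raised, (grad f)ⁱ = gⁱʲ ∂f/∂xʲ, which is the direction the Riemannian Steepest Descent Method follows. The example below uses an invented diagonal metric and objective, not the induced metric from the paper's framework:

```python
import numpy as np

def inner_product(g, u, v):
    """Metric inner product <u, v> = g_ij u^i v^j for vectors in one chart."""
    return u @ g @ v

def riemannian_gradient(g, euclidean_grad):
    """Raise the index of the Euclidean gradient: (grad f)^i = g^{ij} df/dx^j."""
    return np.linalg.solve(g, euclidean_grad)

# Invented example: diagonal metric g = diag(2, 0.5) and f(x) = x1^2 + x2^2.
g = np.diag([2.0, 0.5])
x = np.array([1.0, 1.0])
df = 2.0 * x                              # Euclidean gradient of f at x
grad_f = riemannian_gradient(g, df)       # -> [1.0, 4.0]

# One Riemannian steepest-descent step moves along -grad f (exactly, along
# the geodesic in that direction; in these flat coordinates, a straight line).
step = x - 0.1 * grad_f
```

Note how the metric reweights the descent direction: with a non-identity g, the Riemannian and Euclidean steepest-descent directions differ, which is exactly why the two families of algorithms trace different optimization trajectories.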

