Abstract
Uncertainty quantification of numerical simulations has attracted significant interest in recent years and, as a consequence, so has optimization under uncertainty. One of the main challenges in this field is the efficient propagation of uncertainties from their sources to the quantities of interest, especially when the sources of uncertainty are numerous. Other important challenges are the coupling of the optimization procedure with the uncertainty quantification routines, usually treated as two independent problems, and the need to perform a massive ensemble of numerical simulations efficiently. The primary goals of this work are to develop algorithms for efficient uncertainty quantification and optimization under uncertainty and to apply them to industrial problems. We first introduce a novel way to perform uncertainty quantification based on simplex elements in the probability space and prove its effectiveness on real-life problems. We prove that this algorithm requires fewer evaluations of the quantity of interest than the approaches widely adopted in this field of study. This is particularly important in optimization under uncertainty, where the cost of the deterministic optimization is amplified by the presence of a nested uncertainty quantification algorithm. We will review the state of the art in optimization under uncertainty in order to introduce novel methodologies that overcome the limitations of the current framework. These novel formulations consider the full identity card of a system analyzed under uncertainty: the Cumulative Distribution Function. A methodology for single-objective problems with an a posteriori selection of the candidate design, based on the risk/opportunity criteria of the designer, will be presented and assessed.
Multi-objective problems will then be considered and a novel algorithm will be presented, P-NSGA (Probabilistic Non-dominated Sorting Genetic Algorithm), which generalizes NSGA-II, a widely adopted algorithm for multi-objective deterministic optimization. Furthermore, the cost of optimization under uncertainty motivates the effort devoted to High Performance Computing in order to obtain the most efficient solution for automatically performing a large ensemble of computations. We will present Leland, a simulation environment that has been developed to dynamically schedule, monitor, and steer the calculation ensemble and to extract runtime information as well as simulation results and statistics. Leland is equipped with an auto-tuning strategy for optimal load balancing and with fault-tolerance checks to avoid failures in the ensemble, features that will be proven to be a necessity in optimization under uncertainty. Game Theory will be investigated and proven to be a possible approach to problems of optimization under uncertainty in which a lack of knowledge about the variability of several uncertain parameters is taken into account. Two industrial applications will be presented in the course of this thesis: the optimization of the shape of wind turbine blades and the optimization of a Formula 1 tire brake intake. Both problems are multi-objective and the presence of uncertainties significantly impacts the estimation of their responses; hence they are well suited to assess the theoretical framework and the algorithms that will be presented in this thesis.