Abstract

In recent years, the automotive engineering industry has been deeply influenced by the use of machine learning techniques for new design and innovation purposes. However, some specific engineering tasks, such as numerical optimization studies, still require the development of suitable high-performance machine learning approaches involving parametrized Finite Element (FE) structural dynamics simulation data. Reducing the weight of a car body is a crucial issue, as it improves both the environmental impact and the cost of the product. The current optimization process at Renault SA uses numerical Design of Experiments (DOE) to find, for each part of the vehicle, the thicknesses and materials that guarantee a reduced weight while preserving a good behavior of the car body, assessed by criteria or sensors on the body (maximum displacements, upper bounds on instantaneous acceleration …). The usual DOE methodology generally requires between 3 and 10 times as many simulations as there are parameters in the study (which means, for a 30-parameter study, at least 90 simulations, typically at 10 h per run on a 140-core computer). Over the last 2 years, Renault’s teams have striven to develop a disruptive methodology for conducting optimization studies. By ‘disruptive’, we mean a methodology that cuts the computational cost by several orders of magnitude. It is acknowledged that standard DOEs need a number of simulations that is at least proportional to the dimension of the parameter space, generally leading to hundreds of fine simulations for real applications. By comparison, a disruptive method should require only about 10 fine evaluations. This can be achieved by combining massive knowledge extraction from FE crash simulation results with parallel high-performance computing (HPC). For instance, in the recent study presented by Assou et al. (A car crash reduced order model with random forest. In: 4th International workshop on reduced basis, POD and PGD Model Reduction Techniques—MORTech 2017. 2017), it took 10 runs to find a solution of a 34-parameter problem that fulfils the specifications. To improve this method, we must extract more knowledge from the simulation results (correlations, spatio-temporal features, explanatory variables) and process it to find efficient ways of describing the car crash dynamics and of linking criteria/quantities of interest to explanatory variables. One of the improvements made in recent months is the use of the so-called Empirical Interpolation Method (EIM, [Barrault et al.]) to identify the few time instants and spatial nodes of the FE mesh (referred to as magic points) that “explain” the behavior of the body during the crash, within a dimensionality reduction approach. EIM replaces a former K-Means algorithm (Davies et al., IEEE Trans Pattern Anal Mach Intell, 1(2):224–227, 1979) that was run online for each reduced-order model (ROM). Instead, the EIM computation is done offline, once and for all, for each simulation. This new method allows us to compute a ROM much faster and to reduce the number of features used for the regression step (~100). The nonlinear regression step is performed by a standard Random Forest (RF, [Breiman. Mach Learn 45:5–32, 2001]) algorithm. Another improvement is the characterization of numerical features describing the shape of the body at a nodal scale.
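As an illustration of this pipeline, the sketch below combines a generic, EIM-style greedy selection of interpolation indices with a Random Forest regression. The data are synthetic stand-ins for the FE crash snapshots and design parameters, and the names (eim_magic_points, snapshots, params) are hypothetical; the selection rule is the standard EIM greedy procedure, not necessarily the exact variant used in ReCUR.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def eim_magic_points(U, n_points):
    """Greedy EIM-style selection of interpolation indices ("magic points")
    from a reduced basis U (rows: space-time degrees of freedom,
    columns: basis vectors), computed once, offline."""
    indices = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, n_points):
        # Interpolate the j-th basis vector on the points selected so far
        coef = np.linalg.solve(U[indices, :j], U[indices, j])
        residual = U[:, j] - U[:, :j] @ coef
        indices.append(int(np.argmax(np.abs(residual))))
    return np.array(indices)

# Illustrative, synthetic data (stand-ins for FE crash simulation results)
rng = np.random.default_rng(0)
n_dof, n_runs, n_params = 5000, 10, 34               # ~10 fine runs, 34 design parameters
snapshots = rng.standard_normal((n_dof, n_runs))     # one flattened space-time response per run
params = rng.uniform(0.9, 1.1, (n_runs, n_params))   # e.g. thickness multipliers

U, _, _ = np.linalg.svd(snapshots, full_matrices=False)  # reduced basis (offline)
pts = eim_magic_points(U, n_points=min(8, n_runs))       # a few "magic" space-time indices

# Nonlinear regression step: map design parameters to responses at the magic points
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(params, snapshots[pts, :].T)
predicted = rf.predict(params[:1])   # response at the magic points for a candidate design
```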
The orientation of the elements surrounding a mesh node must also be taken into account to describe the behavior of that node during the crash. The current method therefore integrates numerical features, computed from the orientation of the elements around each node, to explain the node’s behavior. The paper is organized as follows: the introduction states the scientific and industrial context of the research; then the ReCUR Method is detailed and the recent improvements are highlighted; results are presented and discussed before some concluding remarks on this piece of work.
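The nodal orientation features are not specified in detail here; the following sketch shows one plausible way to aggregate the orientations of the shell elements surrounding each node (mean normal direction plus an angular-dispersion score), on a tiny synthetic mesh. It is an illustration under these assumptions, not the exact feature set of the method.

```python
import numpy as np

def element_normal(coords):
    """Unit normal of a triangular shell element given its 3 node coordinates (3x3 array)."""
    n = np.cross(coords[1] - coords[0], coords[2] - coords[0])
    return n / np.linalg.norm(n)

def nodal_orientation_features(nodes, elements):
    """For each node, aggregate the normals of the surrounding shell elements into a
    4-component feature vector: mean normal direction and its norm (close to 1 when
    the surrounding elements are nearly coplanar, smaller on folds and edges)."""
    normals_per_node = [[] for _ in nodes]
    for tri in elements:                          # tri = (i, j, k) node indices
        n = element_normal(nodes[list(tri)])
        for node_id in tri:
            normals_per_node[node_id].append(n)
    features = np.zeros((len(nodes), 4))
    for node_id, normals in enumerate(normals_per_node):
        if normals:
            mean_n = np.vstack(normals).mean(axis=0)
            features[node_id, :3] = mean_n / (np.linalg.norm(mean_n) + 1e-12)
            features[node_id, 3] = np.linalg.norm(mean_n)
    return features

# Tiny illustrative mesh: 4 nodes, 2 triangular shells sharing an edge
nodes = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 1.]])
elements = [(0, 1, 2), (0, 2, 3)]
print(nodal_orientation_features(nodes, elements))
```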

Highlights

  • Scientific context: the increasing amount of data produced by Finite Element (FE) solvers and the continuous search for accuracy in the numerical models have led to the need to “understand the data”

  • Many criteria can be used in the specification of the study: from intrusions to maximum plastic strain in shell elements, each criterion defines a particular requirement for the safety of the occupants

  • This last criterion comes from the factory process: if the thickness of a part has not varied by more than ± 10%, we can consider that the processes used during car production will not be impacted by the variation, which means that there is no increase in cost (a minimal check is sketched below)
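A minimal sketch of this process criterion, assuming a hypothetical helper process_safe and a 10% tolerance around the nominal thickness:

```python
def process_safe(nominal_thickness, candidate_thickness, tol=0.10):
    """Manufacturing-process criterion: a candidate thickness is acceptable
    if it stays within +/- 10% of the nominal value (no extra production cost)."""
    return abs(candidate_thickness - nominal_thickness) <= tol * nominal_thickness

print(process_safe(1.5, 1.62))   # True  (+8%)
print(process_safe(1.5, 1.2))    # False (-20%)
```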



Introduction

Scientific context: the increasing amount of data produced by Finite Element (FE) solvers and the continuous search for accuracy in the numerical models have led to the need to “understand the data”. The ability to understand a complex system through a large amount of data, and to determine trends or correlations with artificial intelligence algorithms, is strategic, let us say vital, in industry nowadays. Various numerical methods to reduce the dimensionality of the digital twin of a system have been proposed. There are two main classes of reduction methods. Intrusive methods try to directly reduce the system of equations to be solved, by means of projection techniques over a reduced basis of suitable functions (projection-based Galerkin methods, weighted residual methods) or interpolation techniques (collocation methods with suitable reduced-order bases). The derivation of intrusive ROMs may be a heavy task involving substantial code development effort within an open-source code.
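To make the intrusive approach concrete, here is a minimal, self-contained sketch of a POD reduced basis combined with a Galerkin projection of a generic linear system; the operator, loads, snapshot set, and dimensions are synthetic stand-ins, not taken from the paper.

```python
import numpy as np

# Sketch of an intrusive, projection-based (Galerkin) reduction of a linear system A x = b.
rng = np.random.default_rng(1)
n, r = 2000, 5                                        # full-order and reduced dimensions (arbitrary)
A = np.diag(np.linspace(1.0, 10.0, n)) + 1e-3 * rng.standard_normal((n, n))

# Offline: precomputed full-order solutions ("snapshots") and a POD reduced basis
snapshots = np.linalg.solve(A, rng.standard_normal((n, 20)))
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :r]                                          # keep the r dominant POD modes

# Online: Galerkin projection of the operator and load, then a small r x r solve
b = rng.standard_normal(n)
x_reduced = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

# Accuracy depends entirely on how well V spans the solution manifold;
# with this toy snapshot set the error below is only indicative.
x_full = np.linalg.solve(A, b)
print(np.linalg.norm(x_full - x_reduced) / np.linalg.norm(x_full))
```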
