E 05 Occupant Safety II - Kayvantash - Cadlm
1 Introduction
Model reduction techniques (or Reduced Order Modelling - ROM) are algebraic solutions for reducing
the volume of a data set while preserving the most important part of the information contained within
the data, so that all or the most essential part of the data can be retrieved when needed. This is
commonly done via decomposition, clustering or other efficient data compression techniques. Such
techniques allow for creating on-board and real-time applications based on voluminous experimental
or simulation results (e.g. finite element).
In this paper we present the major idea behind the reduction (or fusion) methods and provide
three potential applications for crash and safety simulations. The results are obtained with the
ODYSSEE (Lunar) software [2] and compared with LS-DYNA FEM results.
Some reduction techniques are based on reducing the order of the operators representing the set of
algebraic operations resulting from the discretization of differential equations. These are commonly
referred to as PGD (Proper Generalized Decomposition) and are only applicable to reducing the order
of partial differential equations (PDE). They are considered "intrusive" since they require extensive
modifications or even rewriting of the solver code. However, recent research has shown that, using
machine learning techniques, other physical phenomena - for which a PDE (partial differential
equation) is not known, does not exist, or is not fully or efficiently handled from a computational effort
point of view - may also be explored and predicted using ROM methods. In particular, "non-intrusive"
or a posteriori techniques of results post-processing are of interest. These methods (such as POD)
simply require the establishment of a new vector basis (orthogonal or reference vectors), as opposed to
the original parameter/time-related one, and a projection of the known model or experimental results
onto that new basis. This technique is similar to a PCA (Principal Component Analysis) projection.
Independent of the method selected, a ROM technique implies three steps: decomposition, reduction
and reconstruction. In the next paragraphs we briefly describe these steps and then proceed
with three crash and safety related applications.
X = Table of design parameters (or variables) defining experiments (we also call this a DOE)
Y = Results of the experiments associated with X
XN = A new table of parameters for which we seek results without actually performing
experiments or computations
YN = Results of the new experiments associated with XN (to be computed by ROM)
G, H, S = Decomposition matrices of which Gr, Hr, Sr are sub-matrices.
y = Spatial or latent metrics of the system (sometimes called coefficients)
t = time
m, n, p, r = indices (dimensions of matrix)
Note – The N in XN and YN stands for NEW, since we do not include XN, which is known, in the
decomposition process and we do not know YN (since we want to predict it).
2.1 Decomposition
Assume we have a set of data representing a time-dependent phenomenon Y(y, t), where y
represents a "spatial" but latent variable associated with the response:
Y(y, t) = G(y) · S · H(t), where G is a "spatial" or "latent" operator (a matrix), S is a transfer matrix
(diagonal or semi-diagonal) and H is a "temporal" or "modal" operator (a matrix).
Let us also assume that we know another set of data X (representing a sampling table of some
properties or criteria whose variations have produced the variations of the responses). In short, X
represents a Design of Experiments and Y the corresponding outcome at the X sampling points. We
are interested in establishing a function (the ROM operator) which relates Y to X, as in the case of a
"supervised" learning algorithm.
Presented in matrix form, if the original data are of dimensions X (m×n) and Y (m×p), then the
decomposed matrices have the dimensions G (m×m), S (m×p) and H (p×p).
If the above decomposition exists - and we may explore this existence by matrix algebra methods
such as SVD (Singular Value Decomposition) or machine learning (such as clustering, etc.) -
then we can claim to have reduced (or projected) the original data set onto a new basis with
special and very useful properties such as orthogonality.
In particular, by applying the decomposition and the projection we have separated the variables y and t
(similar to modal decomposition techniques in transient dynamic problems). We can claim to have
obtained a decoupled version of the original set of data. Secondly, we can now perform separate
operations on either of the two major components (matrices G or H) in order to further exploit the data,
in association with many available interpolation techniques.
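As an illustration of the decomposition step, the following is a minimal sketch assuming an SVD-based decomposition of a snapshot matrix Y in Python/NumPy; the variable names, dimensions and data are purely illustrative and do not reflect the internal implementation of ODYSSEE.

    import numpy as np

    # Y: m x p matrix of responses (m DOE samples, p time samples per response)
    m, p = 20, 500
    Y = np.random.rand(m, p)              # placeholder for experimental or FE results

    # Full SVD: Y = G @ S @ H, with G (m x m), S (m x p), H (p x p)
    G, s, H = np.linalg.svd(Y, full_matrices=True)
    S = np.zeros((m, p))
    S[:min(m, p), :min(m, p)] = np.diag(s)

    # The decomposition separates the "spatial/latent" content (G) from the
    # "temporal/modal" content (H); the reconstruction is exact (no loss)
    assert np.allclose(Y, G @ S @ H)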
2.2 Reduction
In general, the matrix S contains either a set of singular values (if an SVD-based decomposition is
performed) or cluster coupling coefficients (if clustering is performed), relating the spatial and
temporal components of Y. It is possible to consider only subsets of G and H based on some
tolerance criterion (such as the ratio of the diagonal terms of S), capturing only the most important
part of the two matrices. We may call these subsets, or reduced matrices, Gr and Hr.
In this case only the "reduced" matrices need be considered, with important consequences for CPU
and storage, which are essential for on-board computing solutions. In matrix form, if the original
data X and Y are of dimensions (m×n) and (m×p), then the reduced matrices have the dimensions
Gr (m×r), Sr (r×r) and Hr (r×p), where r represents the number of retained modes or clusters. We can
observe that if a new set of parameters XN (q×n) is evaluated, the reduced matrices still provide a
result matrix YN (q×p), with the same response dimension p as Y.
At any instance we can reconstruct the original set of data either completely (using the decomposed
operators) without loss, or partially (using the reduced operators) with some loss of information. A partial
reconstruction means that we only retain selected parts of G and H, a technique which can be
likened to filtering. This allows a reduction of the volume of the "on-board" data as well as a
filtering of noise or other high-frequency sources of uncertainty in the results.
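Continuing the sketch above, a hedged illustration of the reduction step: only the r dominant singular values are retained, here based on a simple energy-ratio tolerance (one possible criterion among others, not necessarily the one used in ODYSSEE).

    # Continuing from the decomposition sketch: Y (m x p) and its SVD factors
    G, s, H = np.linalg.svd(Y, full_matrices=False)   # economy-size SVD is sufficient here

    # Choose r so that the retained singular values capture, e.g., 99% of the "energy"
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.99) + 1)

    # Reduced operators: Gr (m x r), Sr (r x r), Hr (r x p)
    Gr, Sr, Hr = G[:, :r], np.diag(s[:r]), H[:r, :]

    # Partial reconstruction with some loss of information; acts like a filter
    Y_r = Gr @ Sr @ Hr
    rel_error = np.linalg.norm(Y - Y_r) / np.linalg.norm(Y)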
2.3 Reconstruction
For predictions which require the results YN for a new parameter set XN, we observe that we
can replace G or H by their modified updates G' or H', taking into account the change in the
parameters, and obtain a new set of responses corresponding to the slightly modified new positions XN
compared with the original X. In simple terms, we can predict the effect of X moving to XN on Y
(moving to YN) by considering its effect on G (moving to G') or on H (moving to H') only. If we need to
compute YN, we need to compute the effect of XN on G or H. It turns out that the updates G' or H'
may be obtained simply by an adequate interpolation technique such as radial basis functions, kriging,
etc. A final back multiplication provides the results YN.
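A minimal sketch of this prediction step, continuing the example above: the reduced coefficients of each DOE sample are interpolated over the design table X with a radial basis function (SciPy's RBFInterpolator is used here purely as an illustrative choice among the interpolation techniques mentioned).

    from scipy.interpolate import RBFInterpolator

    # X: m x n DOE table; XN: q x n new parameter sets for which results are sought
    n, q = 3, 5
    X = np.random.rand(m, n)
    XN = np.random.rand(q, n)

    # Coefficients of each DOE sample expressed in the reduced temporal basis Hr
    coeffs = Gr @ Sr                       # shape (m, r)

    # Interpolate the reduced coefficients as a function of the design parameters
    rbf = RBFInterpolator(X, coeffs)       # thin-plate-spline kernel by default
    coeffs_new = rbf(XN)                   # shape (q, r)

    # Final back multiplication onto the temporal/modal basis gives the predictions
    YN = coeffs_new @ Hr                   # shape (q, p)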
Lastly, it is important to point out that it is possible to mix experimental and numerical results and
construct a unified database (Y experiments+simulation), in which case we could call the database a fusion of
real and virtual information. This alone opens up new horizons in modelling, which has until now
suffered from "correlation" and "validation" issues. We can claim that we have a new tool
which benefits from the best of both experimental and numerical technologies.
3 Applications
Three applications are presented hereafter, showing the different potential of the proposed method and
solution software. The starting point is, as in many other parametric design projects, the construction
of the DOE samples and the time-dependent response of the system for each case present in the
DOE table (Figure 1). Note that the response may be measured experimentally or computed via
discretization techniques such as finite elements, etc.
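As a hedged illustration of how such a DOE table X might be assembled (using SciPy's Latin Hypercube sampler purely as an example; the actual designs used below are the OLH and space-filling samples described in Fig. 5, and the parameter bounds here are hypothetical):

    import numpy as np
    from scipy.stats import qmc

    # Three illustrative design parameters, e.g. airbag mass injection and impact
    # velocity as in the sled test application (bounds and third parameter hypothetical)
    lower = np.array([0.8, 0.8, 25.0])
    upper = np.array([1.2, 1.2, 35.0])

    sampler = qmc.LatinHypercube(d=3, seed=0)
    X = qmc.scale(sampler.random(n=9), lower, upper)   # 9 x 3 DOE table

    # Each row of X is then run through the FE model (or tested physically)
    # to produce the corresponding time-dependent response, i.e. one row of Y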
For all three applications, a desktop version of the ODYSSEE (Lunar) software was used,
running on a laptop (DELL M4800, INTEL i7 4910MQ, 2.9 GHz, 16 GB RAM).
Fig.2: Computing influence lines for moving support point – Black square points represent the DOE
samples and the red circles represent the prediction points.
Fig.3: Computing influence lines for moving support point at four combinations of model parameters
(colored circle points) – Results (Finite Elements versus ROM)
Fig.4: Plate-sphere impact (representing bonnet-head in pedestrian case) including possible rupture
of composite material – Results (Finite Elements versus ROM)
Fig.5: Sled test + airbag with variations on airbag mass injection and impact velocity - DOE with 3
parameters, 9 (OLH) + 6 “space filling” samples
Fig.6: Results of sled test comparing FEM (LS-DYNA) and ROM (ODYSSEE/Lunar)
Results obtained by ROM lie within 1-10% of their finite element counterparts. The computing time for
the ROM models is typically in the order of seconds, whereas the FE models range from minutes of
computing (applications 1 and 2) to hours for the sled test model. In practice, any new set of parameters
can be studied in a matter of seconds, i.e. in "quasi real-time", with a gain of many orders of magnitude
in terms of computation time.
Finally, comparative performance and portability studies were also carried out on an on-board
version (Raspberry Pi 3) of ODYSSEE/Quasar (the results will be published in a separate work). It
has been shown that nearly all the presented applications may be conducted on a very small CPU,
while the current limitations concern the storage available on such on-board devices.
6 Literature
[1] Kayvantash, K.: "European LS-DYNA Conference", Proceedings, Würzburg, Germany, 2015
[2] Kayvantash, K.: "ODYSSEE (Quasar/Lunar) software package for Machine Learning, Model
Fusion, Forecasting and ", CADLM, www.cadlm.com, Wissous, France, Release 9/2017
[3] Le Duc, T., Kayvantash, K.: "Quasar Online software implementation for web-based Model
Reduction and Machine Learning", CADLM, www.cadlm.quasaronline, Wissous, France,
Release 10/2018
[4] Ovazza, H., Kayvantash, K.: "An onboard implementation of Quasar software for Raspberry Pi 3
for Lidar/Radar/MobileEye data - Fusion and forecasting of Autonomous Vehicle Data
recordings", Internal Report, CADLM, www.cadlm.com, Wissous, France, 8/2018
[5] Yasuki, T.: "Application of reduced model to estimating Nij of HYBRID3 AF05 dummy in sled FE
simulation", DYNA Conference, Salzburg, Austria, 2017
[6] Yasuki, T.: "Application of Reduced Model to Simulations of Occupant Protection and
Crashworthiness at Toyota", Advanced CAE Division, Toyota Motor Corporation, 15th
International LS-DYNA® Conference & Users Meeting, Detroit, USA, 2018