Code_Aster ®
Version 7.1
Title: Recalibration algorithm
Date: 26/08/03
Author(s): N. TARDIEU
Key: R4.03.06-A
Organization(s): EDF-R&D/AMA
Reference Manual
R4.03 booklet: Sensitivity analysis
Document: R4.03.06
Recalibration algorithm
Summary:
This document presents the recalibration algorithm of MACR_RECAL. It is a Levenberg-Marquardt
algorithm with bounds.
The general method is first described, before some of its elements are specified in detail: the
computation of the cost functional, the Jacobian matrix, the determination of the initial
regularization parameter and its evolution, the handling of the bounds, and the convergence criterion.
Table of contents
1 Introduction
2 Levenberg-Marquardt algorithm
2.1 Statement of the problem
2.2 Solution
3 Practical implementation
3.1 Definition of the cost functional
3.2 Form of the Jacobian matrix
3.3 Regularization of the linear system
3.3.1 Initial value of λ
3.3.2 Evolution of the value of λ
3.4 Bounds on the domain of evolution of the parameters
3.5 Nondimensionalization
3.6 Convergence criterion
4 Global algorithm
5 Bibliography
1 Introduction
Before addressing the recalibration problem itself, it is useful to recall a few elements about
parameter identification. Suppose that one wishes to identify n parameters from a given mechanical
test. Within the framework of this identification, the following quantities are defined:
· c, the vector of the n parameters to be identified, belonging to O, a closed convex subset of R^n.
· d, the vector of the quantities computed during a simulation of the test using the parameters c,
as opposed to d^exp, the vector of the quantities measured during an experimental test. Both belong
to the space L of the observable quantities. The simulation of the experimental test, parameterized
by the vector c, can be carried out by various methods: finite differences, finite elements,
boundary elements, etc. This is what we will call the direct problem.
The goal of the identification is to determine the set of parameters c that reduces the gap between
the measured and computed quantities (in the strong hope that reducing this gap is enough to obtain
the desired set of parameters…). A cost functional, denoted J, depending on c and measuring the
distance between d and d^exp, is therefore introduced:

J(c) = \| d - d^{exp} \|                                                          éq 1-1

where \| \cdot \| denotes a norm on L.
The identification is thus expressed in the form of the following minimization problem:

Determine c^* \in O such that J(c^*) = \min_{c \in O} J(c)

Finally, recalibration is defined as the minimization of a particular type of functional, the
so-called "least-squares" functionals, which take the form:

J(c) = \sum_{n=1}^{N} j_n^2(c)                                                    éq 1-2

It is commonly accepted that the most effective minimization algorithm for this type of functional
is the Levenberg-Marquardt algorithm. It is this algorithm that is implemented in the MACR_RECAL
command of Code_Aster and that we present in the following.
2 Levenberg-Marquardt algorithm
2.1 Statement of the problem
There are several families of minimization algorithms [bib1]. For relatively regular problems, the
most widely used are descent methods. Their principle is to generate iteratively a sequence
(c^k)_{k \in N} defined by:

c^{k+1} = c^k + \alpha^k g^k                                                      éq 2.1-1

such that, for f(x) = J(c^k + x g^k), x \in R^+:
· f(x) is decreasing in the neighbourhood of 0^+
· f(\alpha^k) = \min_{x>0} f(x)
g^k is the descent direction at step k. It is the method used to determine g^k that governs the
nature, and hence the effectiveness, of the algorithm, knowing that these techniques are mainly
based on first-order or second-order approximations of J. The Levenberg-Marquardt algorithm relies
on a second-order approximation of the functional.
2.2 Solution

Within the framework of recalibration, one handles least-squares cost functionals of the type:

J(c) = \sum_{n=1}^{N} j_n^2(c)                                                    éq 2.2-1

where, for example, j_n(c) = f_n^{calc}(c) - f_n^{exp}, with obvious notation.
The characteristic of these cost functionals lies in the fact that the form of their first and
second derivatives is known:

\left( \nabla_c J(c) \right)_i = \frac{\partial J}{\partial c_i}(c)
  = 2 \sum_{n=1}^{N} \frac{\partial j_n}{\partial c_i} j_n(c)                     éq 2.2-2

\left( H(c) \right)_{ij}
  = 2 \sum_{n=1}^{N} \left[ \frac{\partial j_n}{\partial c_i} \frac{\partial j_n}{\partial c_j}
  + \frac{\partial^2 j_n}{\partial c_i \partial c_j} j_n(c) \right]               éq 2.2-3
Then, assuming that the second term of the preceding equation is negligible compared with the first
(which is exact when the j_n are linear in c: this term is then zero), one can rewrite:

\left( H(c^k) \right)_{ij}
  \approx 2 \sum_{n=1}^{N} \frac{\partial j_n}{\partial c_i} \frac{\partial j_n}{\partial c_j}   éq 2.2-4
At this point it is useful to introduce the sensitivity matrix, or Jacobian matrix, defined by:

A = \begin{pmatrix}
\frac{\partial j_1}{\partial c_1} & \frac{\partial j_1}{\partial c_2} & \ldots & \frac{\partial j_1}{\partial c_n} \\
\frac{\partial j_2}{\partial c_1} & \frac{\partial j_2}{\partial c_2} & \ldots & \ldots \\
\ldots & \ldots & \ldots & \ldots \\
\frac{\partial j_N}{\partial c_1} & \frac{\partial j_N}{\partial c_2} & \ldots & \frac{\partial j_N}{\partial c_n}
\end{pmatrix}                                                                     éq 2.2-5
The gradient and the Hessian can thus be expressed as:

\nabla_c J(c^k) = 2 A^T j                                                         éq 2.2-6

H(c^k) \approx 2 A^T A                                                            éq 2.2-7

with j = [j_1, \ldots, j_N]^T.
Let us then write the second-order Taylor expansion of J:

J(c) \approx J(c^k) + (c - c^k)^T \nabla_c J(c^k)
  + \frac{1}{2} (c - c^k)^T H(c^k) (c - c^k)                                      éq 2.2-8
Let g^k = c - c^k be the descent step at the point c^k; it must satisfy the stationarity condition
of the quadratic approximation:

\nabla_c J(c^k) + H(c^k) g^k = 0                                                  éq 2.2-9

Using the expressions of the gradient and the Hessian of J, one can write:

(A^T A) g^k = - A^T j                                                             éq 2.2-10
The solution of this equation leads to an algorithm known as Gauss-Newton, which is very effective
but nevertheless presents some drawbacks:
· (A^T A) can be nearly singular, so that no solution exists.
· There is no control on g^k, which can be too large and thus drive the parameters out of the
admissible space.
To overcome these drawbacks, one prefers the Levenberg-Marquardt algorithm, which proposes a
regularization of the Gauss-Newton algorithm:

(A^T A + \lambda I) g^k = - A^T j                                                 éq 2.2-11

where \lambda is a scalar and I the identity matrix.
Note that if \lambda = 0 one recovers the direction given by Gauss-Newton, and if \lambda \to +\infty
one recovers the direction given by the opposite of the gradient of J, i.e. the steepest descent
direction.
The Levenberg-Marquardt algorithm thus consists, starting from a "large enough" value of \lambda,
in decreasing it, by a factor of 10 for example, at each decrease of J. One thus passes gradually
from a steepest-descent algorithm to the Gauss-Newton algorithm. The procedure can therefore be
presented in the following form:
· Choose a starting point c^0 and an initial value of \lambda
· At iteration k, solve (A^T A + \lambda I) g^k = - A^T j, then set c^{k+1} = c^k + g^k
· If J(c^{k+1}) < J(c^k), then \lambda = \lambda / 10, otherwise \lambda = \lambda * 10
· Test for convergence
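To make this procedure concrete, here is a minimal, self-contained Python sketch of the loop above.
It is not the MACR_RECAL implementation: the bounds, the nondimensionalization and the finer
\lambda heuristics described in the following sections are omitted, and the function name
levenberg_marquardt as well as the finite-difference step size are illustrative choices.

    import numpy as np

    def levenberg_marquardt(residuals, c0, lam=1.0, n_iter=50, prec=1.0e-3):
        """Minimal Levenberg-Marquardt loop (sketch, not MACR_RECAL).
        `residuals(c)` returns the error vector j(c)."""
        c = np.asarray(c0, dtype=float)
        j = residuals(c)
        grad0 = None
        for _ in range(n_iter):
            # Forward finite-difference Jacobian A[m, i] = d j_m / d c_i (see section 3.2)
            A = np.empty((j.size, c.size))
            for i in range(c.size):
                dc = 1.0e-6 * max(abs(c[i]), 1.0)    # assumed perturbation size
                cp = c.copy()
                cp[i] += dc
                A[:, i] = (residuals(cp) - j) / dc
            grad = A.T @ j                           # proportional to the gradient of J
            if grad0 is None:
                grad0 = np.linalg.norm(grad)
            if np.linalg.norm(grad) < prec * grad0:  # convergence test (see section 3.6)
                break
            # Solve (A^T A + lambda I) g = -A^T j   [eq 2.2-11]
            g = np.linalg.solve(A.T @ A + lam * np.eye(c.size), -grad)
            j_new = residuals(c + g)
            if j_new @ j_new < j @ j:    # J decreased: accept the step, relax lambda
                c, j, lam = c + g, j_new, lam / 10.0
            else:                        # J did not decrease: reject, stiffen lambda
                lam *= 10.0
        return c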
Note:
We have presented the Levenberg-Marquardt algorithm above from the point of view of a
regularization of the Gauss-Newton algorithm. A different light can be shed on this algorithm by
regarding it as a trust-region algorithm [bib2]. Indeed, one can easily show that the system
[éq 2.2-11] is the stationarity condition of the minimization problem:

Determine g^k such that
g^k = \arg\min_{g} \left( j^T A g + \frac{1}{2} g^T A^T A g \right)
  subject to \| g^k \| \le \Delta^k

where \Delta^k = \| - (A^T A + \lambda I)^{-1} A^T j \| and \lambda \ge 0.
This is a very simple presentation of the Levenberg-Marquardt algorithm, in which several questions
are not addressed:
· How should the functional J be defined when several tests are available?
· How should the initial value of \lambda be chosen?
· How should \lambda be made to evolve in a finer way?
· How should the domain of evolution of each parameter be defined?
We clarify these various points in the following.
3 Practical implementation
3.1 Definition of the cost functional
During a recalibration, the user often has several different measurements available: these are
discrete physical quantities, possibly of different natures, measured during one or several tests.
They are related to a given parameter denoted t (time, abscissa, …), so that they can be
represented by:
F_1^{exp} = \begin{pmatrix} t_1^{exp} & f_1^{exp}(t_1) \\ \vdots & \vdots \\ t_N^{exp} & f_1^{exp}(t_N) \end{pmatrix}, \quad
F_2^{exp} = \begin{pmatrix} t_1^{exp} & f_2^{exp}(t_1) \\ \vdots & \vdots \\ t_M^{exp} & f_2^{exp}(t_M) \end{pmatrix}, \quad \ldots, \quad
F_L^{exp} = \begin{pmatrix} t_1^{exp} & f_L^{exp}(t_1) \\ \vdots & \vdots \\ t_P^{exp} & f_L^{exp}(t_P) \end{pmatrix}   éq 3.1-1
Each of these experimental measurements has its computed counterpart:

F_1^{calc}(c^k) = \begin{pmatrix} \tilde t_1 & \tilde f_1^{calc}(c^k, \tilde t_1) \\ \vdots & \vdots \\ \tilde t_I & \tilde f_1^{calc}(c^k, \tilde t_I) \end{pmatrix}, \quad
F_2^{calc}(c^k) = \begin{pmatrix} \tilde t_1 & \tilde f_2^{calc}(c^k, \tilde t_1) \\ \vdots & \vdots \\ \tilde t_J & \tilde f_2^{calc}(c^k, \tilde t_J) \end{pmatrix}, \quad \ldots, \quad
F_L^{calc}(c^k) = \begin{pmatrix} \tilde t_1 & \tilde f_L^{calc}(c^k, \tilde t_1) \\ \vdots & \vdots \\ \tilde t_K & \tilde f_L^{calc}(c^k, \tilde t_K) \end{pmatrix}   éq 3.1-2
computed for a given parameter set c^k. Note that the computed quantities are not necessarily the
same in number as the measured quantities, nor evaluated at the same values of the parameter t.
One can then define the least-squares functional to be minimized by:
J(c^k) = \frac{1}{J(c^0)} \left[
  \sum_{i=1}^{N} \left( \frac{f_1^{exp}(t_i) - f_1^{calc}(c^k, t_i)}{f_1^{exp}(t_i)} \right)^2
+ \sum_{i=1}^{M} \left( \frac{f_2^{exp}(t_i) - f_2^{calc}(c^k, t_i)}{f_2^{exp}(t_i)} \right)^2
+ \ldots
+ \sum_{i=1}^{P} \left( \frac{f_L^{exp}(t_i) - f_L^{calc}(c^k, t_i)}{f_L^{exp}(t_i)} \right)^2
\right]                                                                           éq 3.1-3
It is important to note that:
· if a computed value f_j^{calc} is not defined at a point t_i, its value is obtained by linear
interpolation;
· if an experimental value f_j^{exp}(t_i) is zero, the quantity f_j^{exp}(t_i) - f_j^{calc}(c^k, t_i)
is not divided by it and appears as such in the expression of the functional;
· the functional J is normalized so as to be equal to 1 at the beginning of the recalibration
iterations.
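As an illustration of this definition, here is a small Python sketch that evaluates the normalized
least-squares functional from lists of experimental and computed curves, under the assumptions
above (linear interpolation, no division by zero experimental values). The function name
cost_functional and the data layout are illustrative, not those of MACR_RECAL.

    import numpy as np

    def cost_functional(exp_curves, calc_curves, J0=1.0):
        """Normalized least-squares functional of eq 3.1-3 (illustrative sketch).
        `exp_curves` and `calc_curves` are lists of (t, f) array pairs, one pair
        per measured quantity; `J0` is J(c^0), used to normalize J to 1 at the
        first iteration."""
        total = 0.0
        for (t_exp, f_exp), (t_calc, f_calc) in zip(exp_curves, calc_curves):
            # Linear interpolation of the computed curve at the experimental abscissae
            f_interp = np.interp(t_exp, t_calc, f_calc)
            diff = f_exp - f_interp
            # Relative error, except where the experimental value is zero
            scale = np.where(f_exp != 0.0, f_exp, 1.0)
            total += np.sum((diff / scale) ** 2)
        return total / J0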
3.2 Form of the Jacobian matrix

For the computation of the Jacobian matrix, the vector j of the errors is defined by:
j = \begin{pmatrix}
\left( f_1^{exp}(t_1) - f_1^{calc}(c^k, t_1) \right) / f_1^{exp}(t_1) \\
\vdots \\
\left( f_1^{exp}(t_N) - f_1^{calc}(c^k, t_N) \right) / f_1^{exp}(t_N) \\
\left( f_2^{exp}(t_1) - f_2^{calc}(c^k, t_1) \right) / f_2^{exp}(t_1) \\
\vdots \\
\left( f_L^{exp}(t_P) - f_L^{calc}(c^k, t_P) \right) / f_L^{exp}(t_P)
\end{pmatrix}                                                                     éq 3.2-1
That is to say, for the l-th measured quantity and the i-th point:

j_i^l = \frac{ f_l^{exp}(t_i) - f_l^{calc}(c^k, t_i) }{ f_l^{exp}(t_i) }          éq 3.2-2
One then recovers the form of the Jacobian matrix of [éq 2.2-5]:

A = \begin{pmatrix}
\frac{\partial j_1^1}{\partial c_1} & \frac{\partial j_1^1}{\partial c_2} & \ldots & \frac{\partial j_1^1}{\partial c_n} \\
\ldots & \ldots & \ldots & \ldots \\
\frac{\partial j_N^1}{\partial c_1} & \frac{\partial j_N^1}{\partial c_2} & \ldots & \frac{\partial j_N^1}{\partial c_n} \\
\frac{\partial j_1^2}{\partial c_1} & \frac{\partial j_1^2}{\partial c_2} & \ldots & \ldots \\
\ldots & \ldots & \ldots & \ldots \\
\frac{\partial j_P^L}{\partial c_1} & \frac{\partial j_P^L}{\partial c_2} & \ldots & \frac{\partial j_P^L}{\partial c_n}
\end{pmatrix}                                                                     éq 3.2-3
where the terms are computed by forward finite differences:

\frac{\partial j_1^1}{\partial c_i}(c_1, \ldots, c_i, \ldots, c_n)
  \approx \frac{ j_1^1(c_1, \ldots, c_i + \Delta c_i, \ldots, c_n) - j_1^1(c_1, \ldots, c_i, \ldots, c_n) }{ \Delta c_i }   éq 3.2-4
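The following Python sketch makes this finite-difference computation explicit. It assumes a
function residuals(c) returning the error vector j of [éq 3.2-1]; the relative perturbation
rel_step is an illustrative choice, not the MACR_RECAL default.

    import numpy as np

    def jacobian_fd(residuals, c, rel_step=1.0e-6):
        """Forward finite-difference Jacobian of eq 3.2-4 (illustrative sketch)."""
        c = np.asarray(c, dtype=float)
        j0 = residuals(c)
        A = np.empty((j0.size, c.size))
        for i in range(c.size):
            dc = rel_step * max(abs(c[i]), 1.0)   # perturbation of parameter c_i
            cp = c.copy()
            cp[i] += dc
            A[:, i] = (residuals(cp) - j0) / dc   # column i: d j / d c_i
        return A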
3.3 Regularization of the linear system

We address here the determination and the evolution of the regularization parameter \lambda. For
this purpose, the following quantities are defined:
· \lambda_{max} = \max(\text{eigenvalues of } A^T A), \quad \lambda_{min} = \min(\text{eigenvalues of } A^T A), \quad cond = \lambda_{max} / \lambda_{min} if \lambda_{min} \ne 0
· Q(c) = J(c^k) + (c - c^k)^T A^T j + \frac{1}{2} (c - c^k)^T (A^T A + \lambda I)(c - c^k)
3.3.1 Initial value of \lambda

Knowing the above quantities, the following algorithm is defined:
· If \lambda_{min} = \lambda_{max} = 0, then \lambda = 1.E-3
· Otherwise:
- if cond < 1.E5, then \lambda = 1.E-16 \lambda_{max}
- otherwise \lambda = ABS(1.E5 \lambda_{min} - \lambda_{max}) / 10001
Note:
In the last case, the value assigned to \lambda has the effect of bringing the condition number of
A^T A back to about 1.E5.
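A direct transcription of these rules in Python might look as follows; this is an illustrative
sketch of the rules stated above, not the MACR_RECAL source code.

    import numpy as np

    def initial_lambda(A):
        """Initial regularization parameter (sketch of section 3.3.1)."""
        eig = np.linalg.eigvalsh(A.T @ A)       # eigenvalues of A^T A
        l_min, l_max = eig.min(), eig.max()
        if l_min == 0.0 and l_max == 0.0:
            return 1.0e-3
        cond = l_max / l_min if l_min != 0.0 else np.inf
        if cond < 1.0e5:
            return 1.0e-16 * l_max
        # Bring the conditioning of the regularized matrix back to about 1.E5
        return abs(1.0e5 * l_min - l_max) / 10001.0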
3.3.2 Evolution of the value of \lambda

To make \lambda evolve, the ratio

r^k = \frac{ J(c^k) - J(c^{k+1}) }{ Q(c^k) - Q(c^{k+1}) }

is defined, which makes it possible to evaluate the validity of the quadratic approximation of J:
the closer it is to 1, the more valid this approximation is. The following algorithm is deduced
from it [bib2]:
· If r^k < 0.25, then \lambda = \lambda * 10
· If r^k > 0.75, then \lambda = \lambda / 15
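In Python, this update rule can be sketched as follows (illustrative only; the function name and
the handling of the intermediate case are assumptions consistent with the rules above):

    def update_lambda(lam, J_old, J_new, Q_old, Q_new):
        """Update of the regularization parameter (sketch of section 3.3.2)."""
        r = (J_old - J_new) / (Q_old - Q_new)   # validity of the quadratic model
        if r < 0.25:
            return lam * 10.0
        if r > 0.75:
            return lam / 15.0
        return lam                              # 0.25 <= r <= 0.75: keep lambda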
3.4 Bounds on the domain of evolution of the parameters

For various reasons, such as guaranteeing the physical validity of the parameters (strictly positive
Young's modulus, Poisson's ratio between 0 and 0.5, …), it is necessary to limit their domain of
evolution. One therefore requires that c remain in an admissible domain O, a closed convex subset
of R^n. This imposes constraints on the parameters:

c_{inf} < c^k + g^k < c_{sup}
After dualization of these conditions by introducing the Lagrange multipliers \mu_{inf} and
\mu_{sup}, the following system is solved:

Find g^k, \mu_{inf}, \mu_{sup} such that
(A^T A + \lambda I) g^k + \mu_{inf} + \mu_{sup} = - A^T j
c^k + g^k \ge c_{inf}, \quad \mu_{inf} \ge 0, \quad (c^k + g^k - c_{inf})_i (\mu_{inf})_i = 0, \quad i = 1, \ldots, n
c^k + g^k \le c_{sup}, \quad \mu_{sup} \le 0, \quad (c^k + g^k - c_{sup})_i (\mu_{sup})_i = 0, \quad i = 1, \ldots, n

This system is solved using an active constraints algorithm. For further details on this algorithm,
refer to [bib3] or [bib4].
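For illustration only, the bound-constrained step can also be obtained by rewriting [éq 2.2-11]
with bounds as a bounded least-squares problem and handing it to an off-the-shelf active-set
solver. The sketch below uses SciPy's lsq_linear with its 'bvls' method (a bounded-variable
least-squares active-set algorithm); it is mathematically the same subproblem, but it is not the
Code_Aster implementation described in [bib3]/[bib4].

    import numpy as np
    from scipy.optimize import lsq_linear

    def bounded_lm_step(A, j, lam, c_k, c_inf, c_sup):
        """Solve (A^T A + lam I) g = -A^T j subject to c_inf <= c_k + g <= c_sup
        (illustrative sketch)."""
        c_k = np.asarray(c_k, dtype=float)
        n = A.shape[1]
        # Augmented least squares: || [A; sqrt(lam) I] g + [j; 0] ||^2
        A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
        b_aug = np.concatenate([-np.asarray(j, dtype=float), np.zeros(n)])
        res = lsq_linear(A_aug, b_aug,
                         bounds=(np.asarray(c_inf) - c_k, np.asarray(c_sup) - c_k),
                         method="bvls")
        return c_k + res.x      # new parameter set c^{k+1}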
3.5 Nondimensionalization

One is often led to identify parameters of various physical natures, whose orders of magnitude can
be extremely different. This can generate very large differences between the orders of magnitude of
the components of the gradient and of the Hessian of the cost functional, and compromise the
solution process.
To overcome this difficulty, it is essential to nondimensionalize the unknowns before beginning the
solution. Here is a simple and effective procedure.
Let c^0 = [c_1^0, c_2^0, \ldots, c_n^0]^T be the initial vector of the quantities to be identified.
The nondimensionalization matrix D is defined by:

D = \begin{pmatrix} c_1^0 & & & \\ & c_2^0 & & \\ & & \ddots & \\ & & & c_n^0 \end{pmatrix}          éq 3.5-1

Then, if the c_i^0 are all nonzero, the dimensionless unknowns can be defined by:

\tilde c^0 = D^{-1} c^0                                                           éq 3.5-2

In the same way, a dimensionless cost functional is introduced:

\tilde J(\tilde c) = J(D \tilde c) = J(c)                                         éq 3.5-3
as well as its gradient:

\nabla_{\tilde c} \tilde J(\tilde c) = \frac{\partial \tilde J(\tilde c)}{\partial \tilde c}
  = \frac{\partial J(D \tilde c)}{\partial \tilde c}
  = \frac{\partial J(c)}{\partial c} \cdot \frac{\partial c}{\partial \tilde c}
  = D \cdot \nabla_c J(c)                                                          éq 3.5-4

and its Jacobian matrix:

\tilde A_{ij} = A_{ij} \, c_j^0                                                    éq 3.5-5

From an algorithmic point of view, the Jacobian matrix is computed classically with the functional J,
then nondimensionalized, as are the current parameters c, before being passed to the minimization
algorithm. At the output of the latter, the parameters \tilde c are redimensionalized to allow the
computation of the functional J.
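This scaling step can be sketched in Python as follows; the function names are illustrative and the
initial parameters c^0 are assumed to be nonzero, as required above.

    import numpy as np

    def nondimensionalize(c, A, c0):
        """Scaling of eqs 3.5-2 and 3.5-5 (illustrative sketch)."""
        d = np.asarray(c0, dtype=float)            # diagonal of the matrix D
        c_tilde = np.asarray(c, dtype=float) / d   # c~ = D^-1 c
        A_tilde = np.asarray(A, dtype=float) * d   # A~_ij = A_ij * c0_j (column scaling)
        return c_tilde, A_tilde

    def redimensionalize(c_tilde, c0):
        """Inverse transformation: c = D c~."""
        return np.asarray(c_tilde, dtype=float) * np.asarray(c0, dtype=float)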
3.6 Convergence criterion

The convergence criterion used in MACR_RECAL consists in testing the decrease of the gradient of
the functional. Recall that the use of this criterion is naturally justified by the fact that the
objective of the recalibration algorithm is to drive this gradient to zero.

\frac{ \| \nabla_c J(c^k) \| }{ \| \nabla_c J(c^0) \| } < Prec                     éq 3.6-1

where Prec is taken by default equal to 1.E-3.
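Using the expression of the gradient from [éq 2.2-6], this test can be written in Python as a short
sketch (illustrative names; grad0_norm is the gradient norm stored at the first iteration):

    import numpy as np

    def converged(A, j, grad0_norm, prec=1.0e-3):
        """Convergence test of eq 3.6-1: relative decrease of || A^T j ||."""
        return np.linalg.norm(A.T @ j) < prec * grad0_norm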
4 Global algorithm

In order to clarify the sequence of the various operations described above, the recalibration
algorithm is presented formally:
· Initializations
- k = 0
- computation of A, nondimensionalization of A
- computation of the initial \lambda
· Global iterations
- nondimensionalization of c^k
- solution of the Levenberg-Marquardt equation
- enforcement of the bounds
- redimensionalization of c^{k+1}
- computation of J(c^{k+1})
- update of \lambda
- computation of A, nondimensionalization of A
- convergence test
- k = k + 1
· End
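For comparison only, the same overall scheme (least-squares cost, finite-difference Jacobian,
bounds, scaling of the parameters, gradient-based stopping) can be reproduced with an off-the-shelf
solver such as SciPy's least_squares. This is not MACR_RECAL and does not use the exact \lambda
heuristics described above; residuals is a user-supplied function returning the error vector j(c),
and the initial parameters are assumed nonzero so that they can serve as scales.

    import numpy as np
    from scipy.optimize import least_squares

    def recalibrate(residuals, c0, c_inf, c_sup):
        """Illustrative stand-in for the global recalibration loop."""
        c0 = np.asarray(c0, dtype=float)
        result = least_squares(
            residuals,
            c0,
            jac="2-point",           # forward finite differences, as in section 3.2
            bounds=(c_inf, c_sup),   # bounds of section 3.4
            x_scale=np.abs(c0),      # scaling by the initial values, as in section 3.5
            method="trf",            # trust-region method that handles the bounds
            gtol=1.0e-3,             # gradient-based stopping, as in section 3.6
        )
        return result.x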
5 Bibliography

[1] J.C. CULIOLI: "Introduction to optimization", Ellipses, 1994
[2] M.S. BAZARAA, H.D. SHERALI & C.M. SHETTY: "Nonlinear Programming, Theory and Algorithms",
    Wiley & Sons, 1979
[3] "Contact by kinematic conditions", Code_Aster Document [R5.03.50]
[4] K. KUNISCH & F. RENDL: "An infeasible active set method for quadratic programming with
    simple bounds", SIAM Journal on Optimization, Volume 14, Number 1, 2003