Optimization
Objective
The basic objective of any optimization method is to find the values of the system
state variables and/or parameters that minimize some cost function of the system.
Some examples:
Minimize:
• the error between a set of measured and calculated data,
• active power losses,
• the weight of a set of components that comprise the system,
• particulate output (emissions),
• system energy, or
• the distance between actual and desired operating points
Formulation
Minimize f(x, u),  x ∈ Rⁿ, u ∈ Rᵐ
Subject to g(x, u) = 0  (equality constraints: the power flow equations)
where x is the vector of system states and u is the vector of system parameters.
In many systems, more measurements are made than are necessary to uniquely
determine the operating point.
State estimation gives the “best estimate” of the state of the system in spite of
uncertain, redundant, and/or conflicting measurements.
Let z be the set of measured quantities
The measured values of z may not equal the true values of ztrue due to
• measurement errors,
• missing data, or
• intentional misdirection (hacking)
We’d like to
• Identify bad data
• Minimize the error between the true and measured values
The error between the measurements and the model is
  e = z − z_true = z − Ax
where x contains the states of the system. Minimize the "squared error":
  ‖e‖² = eᵀe = Σ_{i=1}^{m} ( z_i − Σ_{j=1}^{n} a_ij x_j )²
Minimizing the square of the error removes any effect of sign differences between the measured and true values.
Minimize eᵀe = (z − Ax)ᵀ(z − Ax)
            = (zᵀ − xᵀAᵀ)(z − Ax)
            = zᵀz − zᵀAx − xᵀAᵀz + xᵀAᵀAx
            = zᵀz − 2xᵀAᵀz + xᵀAᵀAx
Solving for x (setting the gradient with respect to x to zero):
  AᵀAx = Aᵀz      solve as Âx = b̂ using LU factorization (or another linear solver)
  x = (AᵀA)⁻¹Aᵀz   (analytic solution)
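A least-squares fit via the normal equations AᵀAx = Aᵀz can be sketched numerically; the 3×2 system below is made up for illustration (it is not the circuit example):

```python
import numpy as np

# Overdetermined system: 3 measurements, 2 states (hypothetical data).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
z = A @ x_true          # noise-free measurements, so the estimate is exact

# Normal equations: A^T A x = A^T z, solved with a direct linear solver
x_hat = np.linalg.solve(A.T @ A, A.T @ z)
print(x_hat)  # recovers x_true = [2, 3]
```

In practice `numpy.linalg.lstsq` solves the same problem more stably than forming AᵀA explicitly.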
Example: a resistive circuit with R1 = R3 = R5 = 1.5 Ω and R2 = R4 = 1.0 Ω. Find the node voltages V1 and V2. The circuit equations are:
  −V1 + R1 z1 + z3 = 0
  −V2 − R5 z2 + z4 = 0
  z3/R2 − z1 + (z3 − z4)/R3 = 0
  z4/R4 + z2 + (z4 − z3)/R3 = 0
Yielding:
  [z1]   [ 0.4593  −0.0593]
  [z2] = [ 0.0593  −0.4593] [V1]
  [z3]   [ 0.3111   0.0889] [V2]
  [z4]   [ 0.0889   0.3111]
Least squares estimation (analytic solution):
  x = (AᵀA)⁻¹Aᵀz
For the circuit example, with A the 4×2 matrix above and z = [z1 z2 z3 z4]ᵀ:
  [V1; V2] = (AᵀA)⁻¹Aᵀz
Solving for the voltages: V1 = 9.8929, V2 = 5.0446
Weighting each error by w_i, minimize Σ_{i=1}^{m} w_i ( z_i − Σ_{j=1}^{n} a_ij x_j )².
In matrix form:
  AᵀWAx = AᵀWz  ⇒  x = (AᵀWA)⁻¹AᵀWz
Let
  W = [100   0   0   0]
      [  0 100   0   0]    the currents are weighted more
      [  0   0  50   0]    heavily than the voltages
      [  0   0   0  50]
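A weighted fit can be sketched the same way; the numbers below are hypothetical (not the circuit example), chosen so the heavily weighted third measurement dominates the fit:

```python
import numpy as np

# Same hypothetical 3x2 system, now with a diagonal weight matrix W:
# the third measurement is trusted ten times more than the others.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
z = np.array([2.1, 2.9, 5.3])          # slightly inconsistent measurements
W = np.diag([1.0, 1.0, 10.0])

# Weighted normal equations: A^T W A x = A^T W z
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ z)
residual = z - A @ x_hat               # the fit favors the third row
```

The third residual comes out much smaller than the first two, as the weights demand.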
A common choice is the inverse of the measurement covariance matrix:
  W = R⁻¹ = diag( 1/σ1², 1/σ2², …, 1/σm² )
A significance level of 0.05 indicates there is a 5% likelihood that bad data exist, or
conversely, a 95% level of confidence in the goodness of the data.
Test for Bad Data
1. Use z to estimate x
With m = 4:
  f = Σ_{i=1}^{m} (1/σi²) ei²
    = 100(0.0141)² + 100(0.0108)² + 50(−0.0616)² + 50(0.0549)²
    = 0.3720
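The computation is easy to reproduce (residuals and weights taken from the example above):

```python
# Weighted sum of squared residuals, with w_i = 1/sigma_i^2 as above.
w = [100, 100, 50, 50]
e = [0.0141, 0.0108, -0.0616, 0.0549]

f = sum(wi * ei**2 for wi, ei in zip(w, e))
print(round(f, 4))  # → 0.372
```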
  Minimize ‖e‖² = eᵀe = Σ_{i=1}^{m} w_i [ z_i − h_i(x) ]²
Nonlinear equations
The approach to finding the states (and inputs) that minimize the weighted,
squared error is similar to the linear case.
The state values that minimize the error are found by setting the derivatives of
the error function to zero:
  F(x) = Hxᵀ R⁻¹ [ z − h(x) ] = 0     (solve using a nonlinear solution method such as Newton-Raphson)
where
  Hx = [ ∂h1/∂x1  ∂h1/∂x2  …  ∂h1/∂xn ]
       [ ∂h2/∂x1  ∂h2/∂x2  …  ∂h2/∂xn ]
       [    ⋮        ⋮            ⋮    ]
       [ ∂hm/∂x1  ∂hm/∂x2  …  ∂hm/∂xn ]
Using Newton-Raphson to solve the nonlinear system of equations requires the
Jacobian matrix:
  J_F(x) = ∂/∂x { Hxᵀ(x) R⁻¹ [ z − h(x) ] } ≈ −Hxᵀ(x) R⁻¹ Hx(x)
(neglecting the second-derivative terms). The resulting iteration is
  Hxᵀ(x^k) R⁻¹ Hx(x^k) [ x^(k+1) − x^k ] = Hxᵀ(x^k) R⁻¹ [ z − h(x^k) ]
where the right-hand side must be ≈ 0 for convergence.
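The iteration can be sketched for a toy problem; the one-state model h(x) = [x², x] and the weights below are made up for illustration:

```python
import numpy as np

# Gauss-Newton sketch for weighted nonlinear least squares. The one-state
# model h(x) = [x^2, x] and the weights are made up for illustration.
def h(x):
    return np.array([x[0] ** 2, x[0]])

def H(x):
    # Jacobian H_x = dh/dx (2x1 here)
    return np.array([[2 * x[0]], [1.0]])

R_inv = np.diag([1.0, 4.0])      # R^{-1}: second measurement trusted more
z = np.array([4.0, 2.0])         # consistent with the state x = 2

x = np.array([1.0])              # initial guess
for _ in range(20):
    Hx = H(x)
    gain = Hx.T @ R_inv @ Hx
    dx = np.linalg.solve(gain, Hx.T @ R_inv @ (z - h(x)))
    x = x + dx
    if np.linalg.norm(dx) < 1e-10:   # mismatch ~ 0: converged
        break
print(x)  # converges to x ≈ 2
```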
Linear Programming
minimize f ( x) = cT x
subject to Ax ≤ b
x≥0
• For any linear programming problem described by (A, b, c), there exists an equivalent dual problem described by (−Aᵀ, −c, −b).
The simplex method is an organized search of the vertices, moving along the steepest edge of the polytope.
Comments:
• A constraint of the form aᵀx ≥ β can be rewritten as −aᵀx ≤ −β.
• If a problem does not require xi to be nonnegative, then xi can be replaced by the difference of two variables xi = ui − vi, where ui and vi are nonnegative.
If a linear programming problem and its dual both have feasible points (i.e., any point satisfying Ax ≤ b, x ≥ 0, or −Aᵀy ≤ −c, y ≥ 0 for the dual problem), then both problems have solutions and their optimal values are the negatives of each other.
Simplex Method
The cost row and constraints are organized into a tableau:
  [ cᵀ  0ᵀ  0 ]
  [ A   I   b ]
Rules:
1. If all coefficients in f(x)=cTx (i.e. top row of the tableau) are greater than or
equal to zero, then the current x vector is the solution.
2. Select the nonbasic variable whose coefficient in f(x) is the most negative entry. This variable becomes the new basic variable x_j.
3. Divide each b_i by the coefficient of the new basic variable in that row, a_ij. The value assigned to the new basic variable is the least of these ratios:
   x_j = b_k / a_kj
4. Using pivot element akj, create zeros in column j of A with Gaussian
elimination. Return to 1.
Example:
Minimize:
  f(x) = −6x1 − 14x2
Subject to:
  2x1 + x2 ≤ 12
  2x1 + 3x2 ≤ 15
  x1 + 7x2 ≤ 21
  x1 ≥ 0, x2 ≥ 0
Rewrite:
Minimize:
  f(x) = −6x1 − 14x2 − 0x3 − 0x4 − 0x5
Subject to:
  2x1 + x2 + x3 = 12
  2x1 + 3x2 + x4 = 15     (x3, x4, x5 are the basic variables)
  x1 + 7x2 + x5 = 21
  x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0, x5 ≥ 0
Tableau (cost row cᵀ, constraint matrix A with an identity block for the slacks, right-hand side b):
   x1   x2   x3   x4   x5 | f(x)
   -6  -14    0    0    0 |   0
    2    1    1    0    0 |  12
    2    3    0    1    0 |  15
    1    7    0    0    1 |  21
    0    0   12   15   21
The bottom row is the current solution: x1 = 0, x2 = 0, x3 = 12, x4 = 15, x5 = 21, giving f(x) = −6x1 − 14x2 = 0.
   x1   x2   x3   x4   x5 | f(x)      ratio
   -6  -14    0    0    0 |   0
    2    1    1    0    0 |  12      12/1 = 12
    2    3    0    1    0 |  15      15/3 = 5
    1    7    0    0    1 |  21      21/7 = 3   ← most restrictive
Increasing x2 will decrease f(x) most rapidly, therefore hold x1 constant at zero and let x2 increase. Set x2 = 3 (with x1 = 0) and solve:
  2x1 + x2 + x3 = 12:    0 ≤ x3 = 12 − x2   ⇒  x2 ≤ 12,  so x3 = 12 − x2 = 9
  2x1 + 3x2 + x4 = 15:   0 ≤ x4 = 15 − 3x2  ⇒  x2 ≤ 5,   so x4 = 15 − 3x2 = 6
  x1 + 7x2 + x5 = 21:    0 ≤ x5 = 21 − 7x2  ⇒  x2 ≤ 3 (most restrictive), so x5 = 21 − 7x2 = 0
x5 is now a nonbasic variable and x2 is a basic variable.
With x2 in the basis, the cost row is updated (the bottom row is the current solution):
   x1   x2   x3   x4   x5 | f(x)
   -4    0    0    0    2 | -42
    2    1    1    0    0 |  12
    2    3    0    1    0 |  15
    1    7    0    0    1 |  21
    0    3    9    6    0
From the third constraint, x2 = (21 − x5 − x1)/7, so
  f(x) = −6x1 − 14x2 = −6x1 − 14(21 − x5 − x1)/7 = −4x1 + 2x5 − 42  ⇒  f(x) = −42 at x1 = x5 = 0
Substituting x2 = (21 − x5 − x1)/7 into the constraints:
  2x1 + (21 − x5 − x1)/7 + x3 = 12   ⇒  (13/7)x1 − (1/7)x5 + x3 = 9
  2x1 + 3(21 − x5 − x1)/7 + x4 = 15  ⇒  (11/7)x1 − (3/7)x5 + x4 = 6
  x1 + 7x2 + x5 = 21 doesn't change except to normalize:  (1/7)x1 + x2 + (1/7)x5 = 3
   x1    x2   x3   x4    x5  | f(x)
   -4     0    0    0     2  | -42
  13/7    0    1    0   -1/7 |   9
  11/7    0    0    1   -3/7 |   6
   1/7    1    0    0    1/7 |   3
    0     3    9    6     0
Return to Step 1.
The cost function will only decrease for an increase in x1: set x5 to zero and let x1 increase.
New constraints:
  0 ≤ x3 = 9 − (13/7)x1   ⇒  x1 ≤ 63/13
  0 ≤ x4 = 6 − (11/7)x1   ⇒  x1 ≤ 42/11   ← most restrictive
  0 ≤ 7x2 = 21 − x1       ⇒  x1 ≤ 21
   x1    x2   x3   x4    x5  | f(x)
   -4     0    0    0     2  | -42
  13/7    0    1    0   -1/7 |   9
  11/7    0    0    1   -3/7 |   6
   1/7    1    0    0    1/7 |   3
    0     3    9    6     0
Set x1 = 42/11 and solve for the remaining variables:
  x2 = 27/11
  x3 = 21/11
  x4 = x5 = 0    (the new nonbasic variables)
Final tableau (after pivoting on x1):
    x1     x2     x3     x4      x5   | f(x)
     0      0      0   28/11   10/11  | -630/11
     0      0      1  -13/11    4/11  |  21/11
     1      0      0    7/11   -3/11  |  42/11
     0      1      0   -1/11    2/11  |  27/11
   42/11  27/11  21/11    0       0
Rewriting f(x) in terms of the nonbasic variables x4 and x5, using x1 = 42/11 − (7/11)x4 + (3/11)x5 from the x1 row:
  f(x) = −4x1 + 2x5 − 42 = (28/11)x4 + (10/11)x5 − 630/11
Both coefficients are greater than zero, therefore no more reduction in f(x) is possible:
  f_min(x) = −630/11
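Since the minimum of a linear program lies at a vertex of the feasible polytope, the simplex result can be cross-checked by brute-force vertex enumeration (a sketch in pure Python, not how one would solve large LPs):

```python
from itertools import combinations

# Cross-check of the simplex result: the LP minimum lies at a vertex of the
# feasible polytope, so enumerate intersections of constraint boundaries.
c = [-6.0, -14.0]
A = [[2.0, 1.0], [2.0, 3.0], [1.0, 7.0], [-1.0, 0.0], [0.0, -1.0]]
b = [12.0, 15.0, 21.0, 0.0, 0.0]     # last two rows encode x1 >= 0, x2 >= 0

def intersect(r1, r2, b1, b2):
    """Solve the 2x2 system r1.x = b1, r2.x = b2 by Cramer's rule."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None
    return ((b1 * r2[1] - b2 * r1[1]) / det,
            (r1[0] * b2 - r2[0] * b1) / det)

best = None
for i, j in combinations(range(len(A)), 2):
    v = intersect(A[i], A[j], b[i], b[j])
    if v and all(row[0] * v[0] + row[1] * v[1] <= bi + 1e-9
                 for row, bi in zip(A, b)):
        cost = c[0] * v[0] + c[1] * v[1]
        if best is None or cost < best[0]:
            best = (cost, v)

print(best)  # minimum -630/11 at the vertex (42/11, 27/11)
```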
Interior Point Method
Interior point methods move through the interior of the feasible region rather than along its edges, updating the iterate as
  x^(k+1) = x^k + α p^k
One of the key aspects of the interior point method is that a transformation is applied
such that the current feasible point is moved (through the transformation) to the center
of the interior.
The new direction is then computed and the interior point is transformed back to the
original space. Each pk must be orthogonal to the rows of A
The projection matrix
  P = I − Aᵀ(AAᵀ)⁻¹A
projects any vector v onto the null space of A, p = Pv, so that Ap = 0.
The transformation will also center the new iterate in the feasible space through scaling.
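The projection can be sketched directly; the constraint matrix below is the equality-form A from the example LP, and v is an arbitrary test vector:

```python
import numpy as np

# Projection onto the null space of A: P = I - A^T (A A^T)^{-1} A.
# For any v, p = P v satisfies A p = 0, so a step along p preserves Ax = b.
A = np.array([[2.0, 1.0, 1.0, 0.0, 0.0],
              [2.0, 3.0, 0.0, 1.0, 0.0],
              [1.0, 7.0, 0.0, 0.0, 1.0]])

P = np.eye(5) - A.T @ np.linalg.inv(A @ A.T) @ A
v = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # arbitrary test vector
p = P @ v
print(np.allclose(A @ p, 0))  # → True
```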
Primal Affine Interior Point Method
1. Let k = 0.
2. Let D = diag(x^k).
3. Compute Â = AD, ĉ = Dc.
4. Compute P̂ = I − Âᵀ(ÂÂᵀ)⁻¹Â.
5. Set p^k = −P̂ĉ.
6. Set θ = −min_j p_j^k.
7. Compute x̂^(k+1) = e + (α/θ) p^k.
8. Compute x^(k+1) = D x̂^(k+1).
9. If ‖x^(k+1) − x^k‖ ≤ ε, then done; else set k = k + 1 and go to step 2.
Example:
Minimize:
  f(x) = −6x1 − 14x2
Subject to:
  2x1 + x2 ≤ 12
  2x1 + 3x2 ≤ 15
  x1 + 7x2 ≤ 21
  x1 ≥ 0, x2 ≥ 0
Rewritten in equality form:
Minimize:
  f(x) = −6x1 − 14x2 − 0x3 − 0x4 − 0x5
Subject to:
  2x1 + x2 + x3 = 12
  2x1 + 3x2 + x4 = 15
  x1 + 7x2 + x5 = 21
  x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0, x5 ≥ 0
Choose a feasible initial starting point: x⁰ = [1 1 9 10 13]ᵀ, with z⁰ = cᵀx⁰ = −20.
  p⁰ = −P̂ĉ
     = − [ 0.9226 −0.0836 −0.1957 −0.1595 −0.0260] [ −6]
         [−0.0836  0.7258 −0.0621 −0.2010 −0.3844] [−14]
         [−0.1957 −0.0621  0.0504  0.0578  0.0485] [  0]
         [−0.1595 −0.2010  0.0578  0.0922  0.1205] [  0]
         [−0.0260 −0.3844  0.0485  0.1205  0.2090] [  0]
     = [ 4.3657  9.6600  −2.0435  −3.7711  −5.5373 ]ᵀ
Calculate θ = −min_j p_j⁰ = 5.5373.
Rescale the current iterate to x̂⁰ = D⁻¹x⁰ = e and move to x̂¹ in the transformed space with α = 0.9:
  x̂¹ = e + (α/θ) p⁰ = [ 1.7096  2.5701  0.6679  0.3871  0.1000 ]ᵀ
Transforming this point back to the original space:
  x¹ = D x̂¹ = [ 1.7096  2.5701  6.0108  3.8707  1.3000 ]ᵀ
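The nine steps above can be sketched end-to-end for one iteration, using the example's A, b, c and starting point (a minimal reading of the algorithm, not production code):

```python
import numpy as np

# One iteration of the primal affine method on the example LP in equality
# form (slack variables x3..x5 included), following steps 2-8 above.
A = np.array([[2.0, 1.0, 1.0, 0.0, 0.0],
              [2.0, 3.0, 0.0, 1.0, 0.0],
              [1.0, 7.0, 0.0, 0.0, 1.0]])
b = np.array([12.0, 15.0, 21.0])
c = np.array([-6.0, -14.0, 0.0, 0.0, 0.0])
alpha = 0.9

x0 = np.array([1.0, 1.0, 9.0, 10.0, 13.0])      # feasible interior point
D = np.diag(x0)                                  # step 2
A_hat, c_hat = A @ D, D @ c                      # step 3
P_hat = np.eye(5) - A_hat.T @ np.linalg.inv(A_hat @ A_hat.T) @ A_hat  # step 4
p = -P_hat @ c_hat                               # step 5
theta = -p.min()                                 # step 6
x_hat1 = np.ones(5) + (alpha / theta) * p        # step 7: step from the center e
x1 = D @ x_hat1                                  # step 8: back to original space
print(np.round(x1, 4))  # close to the worked iterate [1.7096, 2.5701, 6.0108, 3.8707, 1.3]
```

Because p lies in the null space of Â, the new iterate still satisfies Ax = b exactly.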
Consider the case in which there are no inputs, neglecting inequality constraints:
  minimize f(x),  x ∈ Rⁿ
  subject to c(x) = 0
Form the Lagrangian C*(x) = f(x) − λᵀc(x). This is solved by taking the derivatives and setting them equal to zero:
  ∂C*/∂x = ∂f/∂x − λᵀ(∂c/∂x) = 0
  ∂C*/∂λ = −c(x) = 0     (enforces the equality constraint)
Example: Minimize C = ½(x² + y²) subject to 2x − y = 5.
  C* = ½(x² + y²) − λ(2x − y − 5)
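Carrying the derivatives through completes the example:

```latex
\frac{\partial C^*}{\partial x} = x - 2\lambda = 0 \;\Rightarrow\; x = 2\lambda
\qquad
\frac{\partial C^*}{\partial y} = y + \lambda = 0 \;\Rightarrow\; y = -\lambda
\qquad
\frac{\partial C^*}{\partial \lambda} = -(2x - y - 5) = 0 \;\Rightarrow\; 2x - y = 5
```

Substituting: 2(2λ) − (−λ) = 5λ = 5, so λ = 1, giving x = 2, y = −1 and C = ½(2² + (−1)²) = 5/2.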
With both states and inputs:
  minimize C*(x, u) = f(x, u) − λᵀc(x, u),  x ∈ Rⁿ, u ∈ Rᵐ
Taking derivatives:
  ∂C*/∂λ = 0  ⇒  c(x, u) = 0
  ∂C*/∂x = 0 = ∂f/∂x − λᵀ(∂c/∂x)
  ∂C*/∂u = 0 = ∂f/∂u − λᵀ(∂c/∂u)
Power System Applications
Optimal Power Flow
The optimal power flow problem is to formulate the power flow problem to find
system voltages and generated powers within the framework of the objective
function.
In this application, the inputs to the power flow are systematically adjusted to
maximize (or minimize) a scalar function of the power flow state variables.
The two most common objective functions are minimization of generating costs and
minimization of active power losses.
When active power losses in the system are neglected, the optimal power flow problem is typically called the Economic Dispatch problem.
The individual generating unit costs are given as quadratic functions of the generating power Pi:
  Minimize Σ_{i∈PV} Ci(Pi) = Σ_{i∈PV} ( k_i0 + k_i1 Pi + k_i2 Pi² )
  Subject to: Σ_{i∈PV} Pi − PT = 0
Taking derivatives:
  0 = k11 + k12 P1 − λ
  0 = k21 + k22 P2 − λ
      ⋮
  0 = km1 + km2 Pm − λ
  0 = PT − Σ_{i∈PV} Pi
This is a linear system of equations:
  [ k12   0   …   0   −1 ] [ P1 ]   [ −k11 ]
  [  0   k22  …   0   −1 ] [ P2 ]   [ −k21 ]
  [  ⋮             ⋮   ⋮ ] [  ⋮ ] = [   ⋮  ]
  [  0    0   …  km2  −1 ] [ Pm ]   [ −km1 ]
  [  1    1   …   1    0 ] [ λ  ]   [  PT  ]
Equal incremental cost rule
Note that
  λ = k11 + k12 P1 = k21 + k22 P2 = … = km1 + km2 Pm
λ is called the "incremental cost," and this problem solution is known as the "equal incremental cost" rule. The optimal solution occurs when the incremental cost for each generator is the same: an incremental change in any generator's output will result in the same incremental cost increase.
Example: Three generators with the following cost functions serve a load of 952 MW. Assuming a lossless system, calculate the optimal generation scheduling.
Using the equal incremental cost rule:
  [ 0.125  0      0     −1 ] [ P1 ]   [ −1  ]
  [ 0      0.025  0     −1 ] [ P2 ] = [ −1  ]
  [ 0      0      0.050 −1 ] [ P3 ]   [ −1  ]
  [ 1      1      1      0 ] [ λ  ]   [ 952 ]
Solving:
  P1 = 112 MW
  P2 = 560 MW
  P3 = 280 MW    which yields a generating cost of $7,616/hr
  λ = $15/MWhr
The incremental costs satisfy
  λ = 1 + 0.125 P1 = 1 + 0.025 P2 = 1 + 0.050 P3
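The 4×4 system can be checked numerically (a sketch with NumPy; the matrix is exactly the one above):

```python
import numpy as np

# Equal incremental cost conditions for the three-generator example:
# 1 + 0.125 P1 = 1 + 0.025 P2 = 1 + 0.050 P3 = lambda, P1 + P2 + P3 = 952.
M = np.array([[0.125, 0.0,   0.0,   -1.0],
              [0.0,   0.025, 0.0,   -1.0],
              [0.0,   0.0,   0.050, -1.0],
              [1.0,   1.0,   1.0,    0.0]])
rhs = np.array([-1.0, -1.0, -1.0, 952.0])

P1, P2, P3, lam = np.linalg.solve(M, rhs)
print(P1, P2, P3, lam)  # recovers P1 = 112, P2 = 560, P3 = 280, lambda = 15
```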
The value of λ can provide insight into whether or not a utility should buy or
sell generation.
If a buyer is willing to pay $16/MW hr for generation, how much excess generation
should be produced and sold, and what is the profit for this transaction?
Rewriting λ = 1 + 0.125 P1 = 1 + 0.025 P2 = 1 + 0.050 P3 gives
  P1 = 8(λ − 1)
  P2 = 40(λ − 1)
  P3 = 20(λ − 1)
At λ = $16/MWhr, the total generation is 68(λ − 1) = 1020 MW, so the excess over the 952 MW load is 68 MW:
  68 MW × $16/MWhr = $1,088/hr
Therefore, the total cost is $8,670 − 1,088 = $7,582/hr, where $8,670/hr is the generating cost at 1020 MW. This amount is $34/hr less than the original cost of $7,616/hr; thus, $34/hr is the profit achieved from the sale of the excess generation at $16/MWhr.
Utility incremental cost table: what incremental cost corresponds to 10,000 MW of total generation? Stacking units in order of increasing incremental cost:
  Units added (MW)         Total generation (MW)
  1222                      1222
  +160+240                  1622
  +240+310                  2172
  +320+380+220              3092
  +390+450+410              4342
  +470+520+590              5922
  +540+587+608+100+20       7777
  +587+110+130+30           8634
  +170+160+50               9014
  +230+200+60               9504
  +502+290+230+80          10606
From the equal incremental cost rule, the 10,000 MW level falls within the last block of units.
Cost function:
  f = C1 + C2 + C3 = P1 + 0.0625 P1² + P2 + 0.0125 P2² + P3 + 0.0250 P3²
Equality constraints:
  g1: 0 = P2 − PL2 − V2 Σ_{i=1}^{3} Vi Y2i cos(θ2 − θi − φ2i)
  g2: 0 = P3 − PL3 − V3 Σ_{i=1}^{3} Vi Y3i cos(θ3 − θi − φ3i)
Find the matrices:
  ∂g/∂u = [1  0]
          [0  1]
  ∂g/∂x = [∂g1/∂θ2  ∂g1/∂θ3]     (the system Jacobian)
          [∂g2/∂θ2  ∂g2/∂θ3]
where
  ∂g1/∂θ2 = V2( V1Y12 sin(θ2 − θ1 − φ21) + V3Y13 sin(θ2 − θ3 − φ23) )
  ∂g1/∂θ3 = −V2V3Y32 sin(θ2 − θ3 − φ23)
  ∂g2/∂θ2 = −V3V2Y23 sin(θ3 − θ2 − φ32)
  ∂g2/∂θ3 = V3( V1Y13 sin(θ3 − θ1 − φ31) + V2Y23 sin(θ3 − θ2 − φ32) )
Continuing:
  ∂f/∂u = [1 + 0.025 P2]
          [1 + 0.050 P3]
  ∂f/∂x = (∂f/∂P1)(∂P1/∂x),  with ∂f/∂P1 = 1 + 0.125 P1
and
  P1 = V1( V1Y11 cos(θ1 − θ1 − φ11) + V2Y12 cos(θ1 − θ2 − φ12) + V3Y13 cos(θ1 − θ3 − φ13) )
Let
  80 ≤ P1 ≤ 1200 MW
  450 ≤ P2 ≤ 750 MW
  150 ≤ P3 ≤ 250 MW     ← exceeded
If any Pi exceeds its limit, set it to its limit (i.e., P3 = 0.25 pu) and reduce the number of inputs by 1. The new partial derivatives become:
  ∂g/∂u = [1] ;  ∂f/∂u = [1 + 0.025 P2]
          [0]
Solving:
  P1 = 147 MW
  P2 = 556 MW
  P3 = 250 MW
Let P23 ≤ 100 MW (the powerflow in line 2-3 – originally 121.1 MW)
Two approaches:
1. Soft constraint – violating the limiting factor is undesirable
2. Hard constraint – violating the limiting factor is catastrophic
Soft constraint: Use Penalty Functions
A penalty function is a function that is small for values less than the limit, but
increases rapidly if the limit is violated
Examples:
  p(h) = e^(kh),  k > 0
  p(h) = x^(2n) e^(kh),  n, k > 0
  p(h) = a x^(2n) e^(kh) + b e^(kh),  n, k, a, b > 0
Let p(h) = (h − hmax)², where
  h = P23 = V2V3Y23 cos(θ2 − θ3 − φ23) − V2²Y23 cos φ23
Solving with this penalty added to the cost:
  P1 = 141.3 MW
  P2 = 491.9 MW    with cost: $7,781/hr
  P3 = 319.7 MW
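A minimal sketch of a quadratic penalty; the one-sided form below (zero inside the limit) is an illustrative assumption, whereas the text applies p(h) = (h − hmax)² as written:

```python
# Hedged sketch: quadratic penalty applied only when the limit is violated,
# added to the cost so the optimizer is pushed back inside the feasible region.
def penalty(h, h_max):
    return (h - h_max) ** 2 if h > h_max else 0.0

h_max = 100.0   # MW limit on the line 2-3 flow
for h in [90.0, 100.0, 121.1]:
    print(h, penalty(h, h_max))   # zero inside the limit, grows rapidly past it
```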
However, the upper and lower limits cannot be simultaneously violated; thus, out of the possible set of additional Lagrange multipliers, only one of the two will be included at any given operating point: the dual limits are mutually exclusive.
In this case, an additional equality constraint is added, increasing the number of equality constraints to three (including the two original power flow equations).
  ∂g/∂x = [∂g1/∂x1  ∂g1/∂x2  ∂g1/∂x3]
          [∂g2/∂x1  ∂g2/∂x2  ∂g2/∂x3]
          [∂g3/∂x1  ∂g3/∂x2  ∂g3/∂x3]
where
  ∂g1/∂x1 = V2( V1Y12 sin(θ2 − θ1 − φ21) + V3Y13 sin(θ2 − θ3 − φ23) )
  ∂g1/∂x2 = −V2V3Y32 sin(θ2 − θ3 − φ23)
  ∂g1/∂x3 = 0
  ∂g2/∂x1 = −V3V2Y23 sin(θ3 − θ2 − φ32)
  ∂g2/∂x2 = V3( V1Y13 sin(θ3 − θ1 − φ31) + V2Y23 sin(θ3 − θ2 − φ32) )
  ∂g2/∂x3 = 1
  ∂g3/∂x1 = −V2V3Y23 sin(θ2 − θ3 − φ23)
  ∂g3/∂x2 = V2V3Y23 sin(θ2 − θ3 − φ23)
  ∂g3/∂x3 = 0
  ∂g/∂u = [1] ;  ∂f/∂u = [1 + 0.025 PG2]
          [0]
          [0]
  ∂f/∂x = (∂C/∂PG1)(∂PG1/∂x) + (∂C/∂PG3)(∂PG3/∂x), where the first term is
  (1 + 0.125 PG1) [ V1V2Y12 sin(θ1 − θ2 − φ12) ]
                  [ V1V3Y13 sin(θ1 − θ3 − φ13) ]
                  [             0              ]
Solving:
  P1 = 141.3 MW
  P2 = 490.3 MW    with cost: $7,785/hr
  P3 = 321.2 MW
The state estimator is designed to give the best estimates of the voltages and phase
angles minimizing the effects of the measurement errors.
An unobservable system is one in which the set of measurements does not span the entire state space, and therefore not all states can be estimated.
Estimate the power system states and
use the chi-square test of inequality
with α = 0.01 to check for the presence
of bad data in the measurements.
Recall: Minimize ‖e‖² = eᵀe = Σ_{i=1}^{m} w_i [ z_i − h_i(x) ]²
What is x?
  x = [x1; x2; x3] = [θ2; θ3; V3]
What is W?
  W = R⁻¹ = diag( 1/0.010, 1/0.050, 1/0.075, 1/0.050, 1/0.075 )
What is h(x)?
  V3:  h1(x) = x3
  P13: h2(x) = V1x3Y13 cos(−x2 − φ13) − V1²Y13 cos φ13
  Q21: h3(x) = V2V1Y21 sin(x1 − φ21) + V2²Y21 sin φ21
  P3:  h4(x) = x3V1Y31 cos(x2 − φ31) + x3V2Y32 cos(x2 − x1 − φ32) + x3²Y33 cos φ33
  Q2:  h5(x) = V2V1Y21 sin(x1 − φ21) − V2²Y22 sin φ22 + V2x3Y23 sin(x1 − x2 − φ23)
Set up the equations:
  F(x) = Hxᵀ R⁻¹ [ z − h(x) ] = 0
What is Hx? Row by row, ( ∂hi/∂x1, ∂hi/∂x2, ∂hi/∂x3 ):
  ∂h1/∂x: [ 0,  0,  1 ]
  ∂h2/∂x: [ 0,  V1x3Y13 sin(−x2 − φ13),  V1Y13 cos(−x2 − φ13) ]
  ∂h3/∂x: [ V1V2Y21 cos(x1 − φ21),  0,  0 ]
  ∂h4/∂x: [ x3V2Y32 sin(x2 − x1 − φ32),
            −x3V1Y31 sin(x2 − φ31) − x3V2Y32 sin(x2 − x1 − φ32),
            V1Y31 cos(x2 − φ31) + V2Y32 cos(x2 − x1 − φ32) + 2x3Y33 cos φ33 ]
  ∂h5/∂x: [ V1V2Y21 cos(x1 − φ21) + V2x3Y23 cos(x1 − x2 − φ23),
            −V2x3Y23 cos(x1 − x2 − φ23),
            V2Y23 sin(x1 − x2 − φ23) ]
The Newton-Raphson iteration to solve for the set of states x that minimizes the weighted errors is:
  Hxᵀ(x^k) R⁻¹ Hx(x^k) [ x^(k+1) − x^k ] = Hxᵀ(x^k) R⁻¹ [ z − h(x^k) ]
This value is less than χ²(2, 0.01) = 9.21 (chi-square with m − n = 5 − 3 = 2 degrees of freedom at α = 0.01); therefore, the data set is good and does not contain any spurious measurements.