06_mpec_active_set
Armin Nurkanović
3 Active-set methods
4 Numerical benchmarks
MPEC:
  min_{w∈R^n}  f(w)                       (1a)
  s.t.  g(w) = 0,                         (1b)
        h(w) ≥ 0,                         (1c)
        0 ≤ w1 ⊥ w2 ≥ 0,                  (1d)
with w = (w0, w1, w2) ∈ R^n, w0 ∈ R^p, w1, w2 ∈ R^m, and the feasible set
  Ω = {w ∈ R^n | g(w) = 0, h(w) ≥ 0, 0 ≤ w1 ⊥ w2 ≥ 0}.
(Figure: level sets of f(w) and the L-shaped complementarity set in the (w1, w2)-plane, with the minimizer w∗.)
▶ Standard NLP methods solve the KKT conditions.
▶ MPECs violate constraint qualifications, and the KKT conditions may not be necessary.
▶ There are many stationarity concepts for MPECs, and not all are useful.
▶ Workaround/main idea: solve a (finite) sequence of more regular problems.
6. Numerical methods for mathematical programs with complementarity constraints
A. Nurkanović 2/37
Two families of MPCC methods
github.com/nosnoc/nosnoc
To get started see: github.com/nosnoc/nosnoc/tree/main/examples/generic_mpcc
3 Active-set methods
4 Numerical benchmarks
Reg(σ^k):
  min_{w∈R^n}  f(w)
  s.t.  g(w) = 0,
        h(w) ≥ 0,
        w1, w2 ≥ 0,
        w1,i w2,i ≤ σ^k,  i = 1, ..., m.
(Figure: the relaxed feasible set in the (w1, w2)-plane shrinks toward the complementarity set as σ^k → 0.)
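The homotopy loop over Reg(σ^k) can be sketched with a general-purpose NLP solver. Below is a minimal sketch on a two-dimensional toy MPEC (the asymmetric example used later in these slides); the choice of scipy's SLSQP, the starting point, and all tolerances are assumptions of this sketch, not part of the method itself.

```python
import numpy as np
from scipy.optimize import minimize

def solve_reg(sigma, w0):
    """Solve one regularized NLP Reg(sigma) for the toy MPEC
    min 4*(w1-1)^2 + (w2-1)^2  s.t.  0 <= w1 _|_ w2 >= 0."""
    f = lambda w: 4.0*(w[0] - 1.0)**2 + (w[1] - 1.0)**2
    cons = [{"type": "ineq", "fun": lambda w: sigma - w[0]*w[1]}]  # w1*w2 <= sigma
    return minimize(f, w0, bounds=[(0.0, None)]*2, constraints=cons).x

sigma, kappa = 0.5, 0.1        # initial parameter and linear update factor
w = np.array([1.0, 0.5])
while sigma > 1e-9:
    w = solve_reg(sigma, w)    # warm start with w*(sigma^{k-1})
    sigma *= kappa             # sigma^{k+1} = kappa * sigma^k
print(w)                       # approaches the S-stationary point (1, 0)
```

Each relaxed NLP is regular, and the warm start keeps the individual solves cheap; this is exactly the "sequence of more regular problems" idea from the previous slide.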
Consider the two-dimensional MPEC:
  min_{w∈R^2}  (w1 − 1)^2 + (w2 − 1)^2
  s.t.  0 ≤ w1 ⊥ w2 ≥ 0.
▶ The origin w∗ = 0 is a C-stationary point with the optimal multipliers ν = −2, ξ = 2.
▶ There exist two descent directions, d = (1, 0) and d = (0, 1).
▶ The origin is not B-stationary; w = (1, 0) and w = (0, 1) are S-stationary.
(Figure: the feasible set in the (w1, w2)-plane with the S-stationary points (1, 0) and (0, 1) and the C-stationary origin.)
▶ The relaxation method converges in one iteration to the S-stationary point w∗ = (0, 0).
▶ Smoothing needs an infinite sequence for convergence.
(Figure: iterates w^k of the smoothing method in the (w1, w2)-plane approaching w∗ = (0, 0).)
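The difference between the two schemes can be seen numerically. The sketch below assumes a toy objective with unconstrained minimum at (−1, −1), so that the MPEC solution is w∗ = (0, 0); the objective, solver (scipy's SLSQP), and tolerances are assumptions of this sketch and not necessarily the slide's exact example.

```python
import numpy as np
from scipy.optimize import minimize

# assumed toy objective: unconstrained minimum at (-1, -1), MPEC solution (0, 0)
f = lambda w: (w[0] + 1.0)**2 + (w[1] + 1.0)**2

def smoothing(sigma, w0):
    """Smoothing: complementarity replaced by the equality w1*w2 = sigma."""
    cons = [{"type": "eq", "fun": lambda w: w[0]*w[1] - sigma}]
    return minimize(f, w0, bounds=[(0.0, None)]*2, constraints=cons).x

def relaxation(sigma, w0):
    """Relaxation: complementarity replaced by the inequality w1*w2 <= sigma."""
    cons = [{"type": "ineq", "fun": lambda w: sigma - w[0]*w[1]}]
    return minimize(f, w0, bounds=[(0.0, None)]*2, constraints=cons).x

w_s = smoothing(1e-2, [0.5, 0.5])   # stays on w1*w2 = sigma, off the MPEC set
w_r = relaxation(1e-2, [0.5, 0.5])  # relaxed set contains w* = (0, 0)
print(w_s, w_r)
```

The relaxed feasible set contains the MPEC feasible set, so w∗ = (0, 0) can already be returned at a finite σ; the smoothed equality set never touches the complementarity set, so σ must be driven all the way to zero.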
(Figure: relaxed feasible sets in the (w1, w2)-plane for five relaxation schemes: (a) Scholtes, (b) Lin-Fukushima, (c) Steffensen-Ulbrich, (d) Kadrani et al., (e) Kanzow-Schwartz.)
▶ They have better convergence properties than Scholtes' method if the NLPs are solved exactly.
▶ In practice, they perform better only on easier problems [Nurkanović et al., 2024].
Convergence results [Hoheisel et al., 2013]:
a) [Scholtes, 2001]: C-stationarity under MPEC-MFCQ.
b) [Lin and Fukushima, 2005]: C-stationarity under MPEC-MFCQ.
c) [Steffensen and Ulbrich, 2010]: C-stationarity under MPEC-CPLD.
d) [Kadrani et al., 2009]: M-stationarity under MPEC-CPLD.
e) [Kanzow and Schwartz, 2013]: M-stationarity under MPEC-CPLD.
“The price of inexactness” [Kanzow and Schwartz, 2015]
▶ Most convergence results are derived under the assumption that every subproblem for a fixed σ^k is solved exactly.
▶ If the NLPs are solved only to some tolerance ϵ > 0, then most methods have weaker convergence properties.
▶ Steffensen-Ulbrich, Kadrani et al., and Kanzow-Schwartz then converge only to weakly stationary points!
▶ Scholtes' method still converges to C-stationary points.
▶ A convenient implementation pairs Scholtes' method with an interior-point method, where the barrier parameter τ and the homotopy parameter σ are jointly driven to zero [Raghunathan and Biegler, 2005] (IPOPT-C).
The ℓ1 reformulation:
  min_{w∈R^n}  f(w) + ρ w1^⊤ w2
  s.t.  g(w) = 0,
        h(w) ≥ 0,
        w1, w2 ≥ 0.

The ℓ∞ reformulation:
  min_{w∈R^n, s∈R}  f(w) + ρ s
  s.t.  g(w) = 0,
        h(w) ≥ 0,
        w1, w2 ≥ 0,
        w1,i w2,i ≤ s,  i = 1, ..., m,
        0 ≤ s ≤ s̄.
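The ℓ1 reformulation can be driven by increasing ρ until complementarity holds; since the penalty is exact, a finite ρ typically suffices. A minimal sketch on the same toy problem, again assuming scipy and ad-hoc tolerances:

```python
import numpy as np
from scipy.optimize import minimize

def solve_pen(rho, w0):
    """One l1-penalty NLP: min f(w) + rho * w1^T w2  over  w1, w2 >= 0."""
    f = lambda w: 4.0*(w[0] - 1.0)**2 + (w[1] - 1.0)**2 + rho*w[0]*w[1]
    return minimize(f, w0, bounds=[(0.0, None)]*2).x

rho, w = 1.0, np.array([1.0, 1.0])
while w[0]*w[1] > 1e-10 and rho < 1e6:
    w = solve_pen(rho, w)   # warm start with the previous solution
    rho *= 10.0             # increase the penalty parameter
print(w, w[0]*w[1])         # complementarity holds at a finite rho
```

Note the penalized problems are bound-constrained NLPs without the troublesome complementarity constraint, which is what makes this reformulation attractive.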
Approach: Solve a sequence of regularized NLPs Reg(σ^k), warm starting each iteration with w∗(σ^{k−1}). Update the homotopy parameter via:
▶ Linear update rule: σ^{k+1} = κ σ^k, κ ∈ (0, 1).
3 Active-set methods
4 Numerical benchmarks
Difficulties:
1. Regularization methods, even under very strong assumptions, converge to points that are weaker than S-stationary and possibly not B-stationary.
2. Regularized NLPs, for small or large σ, may be extremely difficult to solve (much more so than TNLPs/BNLPs).
▶ Late 1990s, early 2000s: selecting the next TNLP/BNLP based on signs of
multipliers [Fukushima and Tseng, 2002, Giallombardo and Ralph, 2008,
Izmailov and Solodov, 2008, Jiang and Ralph, 1999, Lin and Fukushima, 2006,
Liu et al., 2006, Luo et al., 1996, Scholtes and Stöhr, 1999]. Convergence to
B-stationarity can only be guaranteed under MPEC-LICQ.
▶ 2007: [Leyffer and Munson, 2007] first suggested using LPECs as a stopping criterion and for step computation.
▶ 2022: SQP-type methods with LPECs, developed for MPECs with bound constraints [Kirches et al., 2022] (B-stationary), with an extension to general constraints via an augmented Lagrangian [Guo and Deng, 2022] (M-stationary).
▶ 2024-2025: methods for general MPECs (B-stationary) in [Kazi et al., 2024] and [N. and Leyffer, 2025].
I+0(w) = {i ∈ {1, ..., m} | w1,i > 0, w2,i = 0},
I0+(w) = {i ∈ {1, ..., m} | w1,i = 0, w2,i > 0},
I00(w) = {i ∈ {1, ..., m} | w1,i = 0, w2,i = 0}.

Each branch partitions the biactive set:
  D1(w) ∪ D2(w) = I00(w),  D1(w) ∩ D2(w) = ∅,
  I1(w) := I0+(w) ∪ D1(w),
  I2(w) := I+0(w) ∪ D2(w).

BNLP:
  min_{w∈R^n}  f(w)
  s.t.  g(w) = 0,
        h(w) ≥ 0,
        w1,i = 0, w2,i ≥ 0,  i ∈ I1(w∗),
        w1,i ≥ 0, w2,i = 0,  i ∈ I2(w∗).
(Figure: one branch of the complementarity set in the (w1, w2)-plane, selected by the partition.)
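The index sets above are straightforward to compute at a given point. A small helper (the activity tolerance `tol` is an assumption of this sketch):

```python
def index_sets(w1, w2, tol=1e-9):
    """Classify complementarity pairs at (w1, w2) into the active-set
    partitions I_{+0}, I_{0+}, I_{00} used to define BNLPs."""
    I_p0 = [i for i, (a, b) in enumerate(zip(w1, w2)) if a > tol and b <= tol]
    I_0p = [i for i, (a, b) in enumerate(zip(w1, w2)) if a <= tol and b > tol]
    I_00 = [i for i, (a, b) in enumerate(zip(w1, w2)) if a <= tol and b <= tol]
    return I_p0, I_0p, I_00

# pair 0 = (1.0, 0.0) lies in I_{+0}; pair 1 = (0.0, 0.0) is biactive
print(index_sets([1.0, 0.0], [0.0, 0.0]))  # ([0], [], [1])
```

Only the biactive pairs in I00 admit a choice of branch, which is where the 2^{|I00|} combinatorics comes from.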
LPEC objective: min_{d∈R^n} ∇f(w^k)^⊤ d
(Figure: the branches of the linearized complementarity set in the (w1, w2)-plane, shown over several panels.)
Summary:
▶ The TNLP, RNLP, and BNLPs are regular nonlinear optimization problems.
▶ If we know the right TNLP/BNLP, we can just solve a regular NLP to solve the MPEC.
▶ There are 2^{|I00|} BNLPs, which highlights the combinatorial nature of the problem.
LPEC(w^k, ρ) - reduced:
  min_{d∈R^n}  ∇f(w^k)^⊤ d
  s.t.  0 ≤ w^k_{1,i} + d_{1,i} ⊥ w^k_{2,i} + d_{2,i} ≥ 0,  ∀i,
        ∥d∥∞ ≤ ρ.

LPEC(w^k, ρ) - full:
  min_{d∈R^n}  ∇f(w^k)^⊤ d
  s.t.  g(w^k) + ∇g(w^k)^⊤ d = 0,
        h(w^k) + ∇h(w^k)^⊤ d ≥ 0,
        0 ≤ w^k_{1,i} + d_{1,i} ⊥ w^k_{2,i} + d_{2,i} ≥ 0,  ∀i,
        ∥d∥∞ ≤ ρ.

BNLP(w^k):
  min_{w∈R^n}  f(w)
  s.t.  g(w) = 0,
        h(w) ≥ 0,
        w1,i = 0, w2,i ≥ 0,  i ∈ I1^k,
        w1,i ≥ 0, w2,i = 0,  i ∈ I2^k.

Main steps:
1. Solve LPEC(w^k, ρ) to determine the active set for BNLP(w^k + d), or to verify B-stationarity. Do not use d for the iterate update.
2. Solve BNLP(w^k + d): if the objective decreases, accept the step; otherwise, re-solve the LPEC with a smaller ρ.
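The combinatorial structure of the reduced LPEC can be made explicit by brute force: for each complementarity pair, fix one side to zero, solve the resulting LP, and keep the best of the 2^m branches. This is purely an illustration under stated assumptions (practical implementations use MILP solvers such as Gurobi or HiGHS); the helper name and the toy data are made up for this sketch.

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

def solve_reduced_lpec(grad, w1, w2, rho):
    """Brute-force the reduced LPEC by enumerating branch assignments:
    for each pair i, fix either w1_i + d1_i = 0 or w2_i + d2_i = 0."""
    m = len(w1)
    best = (np.inf, None)
    for branch in product((0, 1), repeat=m):          # 2^m branches
        # variables d = (d1, d2); bounds encode |d|_inf <= rho and w + d >= 0
        bounds = [(max(-rho, -w1[i]), rho) for i in range(m)]
        bounds += [(max(-rho, -w2[i]), rho) for i in range(m)]
        A_eq, b_eq = [], []
        for i, b in enumerate(branch):
            row = np.zeros(2 * m)
            if b == 0:
                row[i] = 1.0; rhs = -w1[i]            # w1_i + d1_i = 0
            else:
                row[m + i] = 1.0; rhs = -w2[i]        # w2_i + d2_i = 0
            A_eq.append(row); b_eq.append(rhs)
        res = linprog(grad, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
        if res.success and res.fun < best[0]:
            best = (res.fun, res.x)
    return best

# biactive point w^k = (0, 0), gradient (-1, -2), trust region rho = 1:
val, d = solve_reduced_lpec(np.array([-1.0, -2.0]), [0.0], [0.0], 1.0)
print(val, d)  # the branch fixing w1 to zero wins: d = (0, 1), value -2
```

Enumeration obviously does not scale in m, which is why the LPEC is solved as a MILP in practice.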
Full vs reduced LPEC
Lemma
Let w ∈ Ω be a feasible point of the MPEC (1). For all trust region radii that satisfy
  0 < ρ < ρ̄ = min({w1,i | i ∈ I+0} ∪ {w2,i | i ∈ I0+}),   (3)
the sets of local minimizers of the reduced and full LPECs are identical. In the special case of I00 = {1, ..., m}, the reduced and full LPEC coincide and, in that case, ρ̄ = ∞.
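The bound ρ̄ from the lemma is cheap to evaluate at a given point. A small helper (the activity tolerance is an assumption of this sketch):

```python
def rho_bar(w1, w2, tol=1e-9):
    """rho_bar = min over the strictly positive components of the active
    pairs (I_{+0} and I_{0+}); infinity when all pairs are biactive."""
    vals = [a for a, b in zip(w1, w2) if a > tol and b <= tol]   # i in I_{+0}
    vals += [b for a, b in zip(w1, w2) if a <= tol and b > tol]  # i in I_{0+}
    return min(vals) if vals else float("inf")

print(rho_bar([1.5, 0.0], [0.0, 0.7]))  # 0.7
print(rho_bar([0.0], [0.0]))            # inf: I00 covers all pairs
```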
Assumption in example: the LPEC is solved to global optimality (turns out: not so restrictive in practice).
Consider the two-dimensional MPEC:
  min_{w∈R^2}  4(w1 − 1)^2 + (w2 − 1)^2
  s.t.  0 ≤ w1 ⊥ w2 ≥ 0.
▶ Two B-stationary points: w̄ = (1, 0) and ŵ = (0, 1), with f(w̄) = 1 and f(ŵ) = 4.
▶ If ρ is sufficiently small, ŵ with f(ŵ) = 4 is verified.
▶ If ρ is large, the globally optimal LPEC solution finds a BNLP with f(w̄) = 1.
▶ Conversely, starting at w̄, the full LPEC finds a BNLP with a larger objective; the step is rejected and ρ is reduced.
(Figure: level sets of f, the direction −∇f, and the LPEC step d in the (w1, w2)-plane at ŵ and w̄.)
Lemma
Let w∗ ∈ Ω be an S-stationary point of the MPEC (1). For sufficiently small ρ > 0, a global minimizer of the relaxed MILP (4) is d = 0 together with any y ∈ {0, 1}^m such that (4d) and (4e) hold.
Crossover strategy
1. Use a regularization- or penalty-based method with parameter σ^k.
2. If ∥diag(w1)w2∥∞ < ρ0, solve LPEC(w∗(σ^k), ρ0).
3. Solve BNLP(w∗(σ^k) + d); if successful, a feasible point is found.
4. If not, reduce σ^k and go to 1.
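A minimal sketch of the crossover on the toy MPEC, where for brevity the LPEC-based active-set prediction of step 2 is replaced by reading the branch directly off w∗(σ^k); that shortcut, the solver choice, and all tolerances are assumptions of this sketch, not the actual algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# toy MPEC: min 4*(w1-1)^2 + (w2-1)^2  s.t.  0 <= w1 _|_ w2 >= 0
f = lambda w: 4.0*(w[0] - 1.0)**2 + (w[1] - 1.0)**2

def reg_step(sigma, w0):
    """Phase 1: one Scholtes-regularized NLP."""
    cons = [{"type": "ineq", "fun": lambda w: sigma - w[0]*w[1]}]
    return minimize(f, w0, bounds=[(0.0, None)]*2, constraints=cons).x

def bnlp(branch, w0):
    """Phase 2: the BNLP fixes one side of the pair to zero."""
    bnds = [(0.0, 0.0), (0.0, None)] if branch == 1 else [(0.0, None), (0.0, 0.0)]
    return minimize(f, w0, bounds=bnds).x

sigma, w = 0.5, np.array([1.0, 0.5])
for _ in range(4):                 # a few regularization steps
    w = reg_step(sigma, w)
    sigma *= 0.1
branch = 1 if w[0] < w[1] else 2   # crude branch guess from w*(sigma)
w = bnlp(branch, w)
print(w)                           # polished to the B-stationary point (1, 0)
```

The point of the crossover is exactly this division of labor: the regularization phase gets close enough to identify a branch, and a single regular NLP then delivers an exactly complementary point.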
▶ In the example, we set a = 1.1.
▶ If σ is not small enough, the LPEC selects an infeasible BNLP.
▶ For smaller σ, the LPEC predicts the correct BNLP.
▶ In practice, the LPEC often finds a feasible BNLP even for large σ.
▶ Moreover, the solution of this BNLP often coincides with the solution of the MPEC.
(Figure: the point x∗(σ) in the (x1, x2)-plane for decreasing σ.)
3 Active-set methods
4 Numerical benchmarks
(Figure: Evaluating different Phase I algorithms in MPECopt on the MacMPEC test set; fraction solved vs. (a) absolute wall time and (b) relative timings, for Reg-LPEC, Reg-Simple, Pen-ℓ1-LPEC, and Feasibility-ℓ1.)
(Figure: Evaluating different LPEC algorithms in MPECopt on the MacMPEC test set; fraction solved vs. absolute and relative timings.)
(Figure: Maximal LPEC solution times per problem instance for different LPEC algorithms (Gurobi-MILP, HiGHS-MILP, Reg-MPEC, ℓ1-MPEC) in MPECopt on the MacMPEC test set.)
(Figure: Comparison of total NLP and LPEC computation times on MacMPEC.)
(Figure: Evaluating different MPEC solution methods (MPECopt-Reg-Gurobi, MPECopt-Reg-HiGHS, MPECopt-ℓ1-Gurobi, Reg, Pen-ℓ1) on the MacMPEC test set in terms of finding a stationary point; fraction solved vs. (a) absolute and (b) relative timings.)
(Figure: Number of NLPs (top plot) and LPECs (bottom plot) solved on all MacMPEC problem instances, for MPECopt-Reg-Gurobi, MPECopt-Reg-HiGHS, MPECopt-ℓ1-Gurobi, Reg, and Pen-ℓ1.)
Distribution of stationary points
(Figure: Distribution of stationary points (A, C, M, S, W, Failure) on MacMPEC for MPECopt-Reg-Gurobi, MPECopt-ℓ1-Gurobi, Reg, and Pen-ℓ1, split into B-stationary and not B-stationary points. Failure counts the number of infeasible problems.)
Active set vs regularization methods on random MPECs
(Figure: Evaluating different MPEC solution methods on the synthetic test set in terms of finding a stationary point; fraction solved vs. (a) absolute and (b) relative timings.)
(Figure: Number of NLPs (top plot) and LPECs (bottom plot) solved on all synthetic test set problem instances.)
Distribution of stationary points
(Figure: Distribution of stationary points on the synthetic test set for different solution methods, split into B-stationary and not B-stationary points. Failure counts the number of infeasible problems.)
Summary

References
Anitescu, M. (2005a). Global convergence of an elastic mode approach for a class of mathematical programs with complementarity constraints. SIAM Journal on Optimization, 16:120–145.
Anitescu, M. (2005b). On using the elastic mode in nonlinear programming approaches to mathematical programs with complementarity constraints. SIAM Journal on Optimization, 15(4):1203–1236.
Fukushima, M. and Tseng, P. (2002). An implementable active-set algorithm for computing a B-stationary point of a mathematical program with linear complementarity constraints. SIAM Journal on Optimization, 12(3):724–739.