1_intro
Mathematical optimization
minimize f0(x)
subject to fi(x) ≤ 0, i = 1, ..., m
           gi(x) = 0, i = 1, ..., p
can we solve the general problem?
▶ generally, no
▶ but you can try to solve it approximately, and it often doesn't matter
Convex optimization
minimize f0(x)
subject to fi(x) ≤ 0, i = 1, ..., m
           Ax = b
▶ variable x ∈ Rn
▶ equality constraints are linear
▶ f0, ..., fm are convex: for θ ∈ [0, 1],
  fi(θx + (1 − θ)y) ≤ θ fi(x) + (1 − θ) fi(y)
  (this inequality is spot-checked numerically in the sketch after this list)
▶ classical view:
– linear (zero curvature) is easy
– nonlinear (nonzero curvature) is hard
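since convexity is just the pointwise inequality above, it is easy to spot-check numerically; the sketch below is illustrative only (the choice f(x) = ‖x‖2, the dimension, and the tolerance are assumptions, not from the slides) and samples random point pairs to verify the inequality for the Euclidean norm, a standard convex function:

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.linalg.norm(x)  # Euclidean norm: a known convex function (assumed example)

    for _ in range(1000):
        x = rng.standard_normal(5)
        y = rng.standard_normal(5)
        theta = rng.uniform(0.0, 1.0)
        lhs = f(theta * x + (1 - theta) * y)
        rhs = theta * f(x) + (1 - theta) * f(y)
        assert lhs <= rhs + 1e-9  # small tolerance for floating-point error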
▶ CVXPY gets close to the basic idea: say what you want, not how to get it
▶ variable is x
▶ A, b given
▶ x ⪰ 0 means x1 ≥ 0, ..., xn ≥ 0

the CVXPY code:

    x = cp.Variable(n)
    obj = cp.norm2(A @ x - b)**2
    constr = [x >= 0]
    prob = cp.Problem(cp.Minimize(obj), constr)
    prob.solve()
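to run this (a nonnegative least-squares problem) end to end, the snippet needs imports and problem data; here is a minimal self-contained sketch, where the dimensions and the random A and b are assumed placeholders, not values from the slides:

    import cvxpy as cp
    import numpy as np

    # placeholder problem data: a random instance (assumed, not from the slides)
    m, n = 20, 10
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    x = cp.Variable(n)
    obj = cp.norm2(A @ x - b)**2             # squared-norm (least-squares) objective
    constr = [x >= 0]                        # x >= 0 holds elementwise
    prob = cp.Problem(cp.Minimize(obj), constr)
    prob.solve()

    print("optimal value:", prob.value)      # optimal objective value
    print("solution x:", x.value)            # an optimal point

after prob.solve(), CVXPY exposes the optimal value as prob.value and a solution as x.value; cp.sum_squares(A @ x - b) is an equivalent, somewhat more idiomatic way to write the same objective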
▶ algorithms
– 1947: simplex algorithm for linear programming (Dantzig)
– 1960s: early interior-point methods (Fiacco & McCormick, Dikin, ...)
– 1970s: ellipsoid method and other subgradient methods
– 1980s & 90s: interior-point methods (Karmarkar, Nesterov & Nemirovski)
– since 2000s: many methods for large-scale convex optimization
▶ applications
– before 1990: mostly in operations research, a few in engineering
– since 1990: many applications in engineering (control, signal processing, communications, circuit design, ...)
– since 2000s: machine learning and statistics, finance