Optimal Control Theory
CONTENTS
The Hamiltonian & Pontryagin's Maximum Principle 338
Constraints on the Endpoints 343
The Current-Value Hamiltonian 344
Maximize    U = ∫₀ᵀ f[x(t), y(t), t] dt
subject to  x′ = g[x(t), y(t), t]                (1)
            x(0) = x₀    x(T) = x_T
where U is the functional to be optimized; y(t) is the control variable, selected or
controlled to optimize U; x(t) is the state variable, which varies over time according
to the differential equation for x′ in the constraint; and t is time.
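To make the setup in (1) concrete, the functional can be evaluated for a candidate control path. The integrand f = x − y², transition function g = y, horizon T = 1, initial condition x(0) = 0, and trial control y(t) = 1 − t below are illustrative assumptions, not taken from the text:

```python
import sympy as sp

t = sp.symbols('t')

# Hypothetical problem data (assumptions for illustration):
#   maximize U = ∫₀¹ (x - y²) dt  subject to  x' = y,  x(0) = 0
T = 1
y_path = 1 - t                             # a candidate (not necessarily optimal) control
x_path = sp.integrate(y_path, (t, 0, t))   # state implied by x' = y with x(0) = 0

# Value of the functional U for this candidate control
f = x_path - y_path**2
U = sp.integrate(f, (t, 0, T))
print(U)
```

Different trial controls yield different values of U; optimal control theory asks which admissible y(t) makes U as large as possible.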
THE HAMILTONIAN & PONTRYAGIN'S MAXIMUM PRINCIPLE
The Hamiltonian function, similar to the Lagrangian function, is a technique
that combines the integrand of the functional being optimized with the
constraint (the equation of motion for the state variable) into a single expression.
In line with (1), the Hamiltonian is
𝐻[𝑥(𝑡), 𝑦(𝑡), 𝜆(𝑡), 𝑡] = 𝑓[𝑥(𝑡), 𝑦(𝑡), 𝑡] + 𝜆(𝑡)𝑔[𝑥(𝑡), 𝑦(𝑡), 𝑡] (2)
where H denotes the Hamiltonian and λ(t) is the costate variable (similar to the
Lagrange multiplier), which represents the marginal value or shadow price of
the state variable x(t).
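As a sketch, the Hamiltonian in (2) can be formed symbolically. The specific choices f = x − y² and g = y below are hypothetical, used only to show the construction and the resulting first-order condition in the control:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')   # state, control, and costate at a given t

# Hypothetical integrand and transition function (assumptions for illustration)
f = x - y**2           # f[x(t), y(t), t]
g = y                  # g[x(t), y(t), t]

# Hamiltonian (2): H = f + λ·g
H = f + lam * g

# The maximum principle requires choosing y to maximize H at each t,
# so for an interior maximum, dH/dy = 0
foc = sp.diff(H, y)
print(foc)
```

Here dH/dy = −2y + λ = 0 gives y = λ/2, so the optimal control is pinned down by the costate path.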
A linear function is both concave and convex, but neither strictly concave
nor strictly convex. For a nonlinear function, the discriminant can test for
joint concavity.
      | f_xx   f_xy |
|D| = |             |
      | f_yx   f_yy |
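For a concrete check, the discriminant test can be run with symbolic differentiation. The function f = −x² + xy − y² is an assumed example, chosen because it satisfies the conditions for strict joint concavity (f_xx < 0 and |D| > 0):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical nonlinear function (assumption for illustration)
f = -x**2 + x*y - y**2

H = sp.hessian(f, (x, y))   # matrix of second partials [[f_xx, f_xy], [f_yx, f_yy]]
fxx = H[0, 0]               # f_xx = -2
D = H.det()                 # |D| = f_xx*f_yy - f_xy*f_yx = (-2)(-2) - (1)(1) = 3

# Strict joint concavity: f_xx < 0 and |D| > 0
print(fxx < 0 and D > 0)
```

Since f_xx = −2 < 0 and |D| = 3 > 0, this f is strictly jointly concave in x and y.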
Chapter 25 | Optimal Control Theory 339