
Linear System Control

Introduction to Optimal Control

Unggul Wasiwitono

Mechanical Engineering Department


Faculty of Industrial Technology and Systems Engineering

Outline

1 What is Optimal Control?

2 Calculus of Variations

3 Linear Quadratic Regulator

4 MATLAB for Control Law Design and Evaluation


What is Optimal Control?


What is Optimal Control?

1 The design of a controller for a process consists of providing a control signal, the
input, that makes the plant behave in a desired fashion.
2 In classical control the performance specifications are given in terms of desired time-domain
and frequency-domain measures, such as
1 step response specifications (overshoot, rise time, settling time),
2 frequency response specifications (bandwidth, resonance frequency, resonance damping), and
3 relative stability in terms of phase and gain margins.
3 These specifications can conflict, in the sense that together they may lead to impossible or
contradictory requirements on the controller parameters.
4 Optimal control is an approach to control system design that seeks the best possible control
with respect to a performance metric.
5 The state feedback methods discussed so far do not consider constraints on states, control
inputs, etc.


The General Optimal Control Problem


Given the system dynamics

    \dot{x}(t) = f(x(t), u(t)), \qquad x \in \mathbb{R}^n,\; u \in \mathbb{R}^m    (1)

the state vector has the value x0 at the initial time, i.e.

    x(t_0) = x_0

Find a control u(t), t ∈ [t0, T], that steers the system from the initial state x0 to a target set
and minimizes the cost

    J(u) = M(x(T), T) + \int_{t_0}^{T} L(x(t), u(t))\, dt    (2)

J(u) is called a performance index.
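Two standard special cases of (2) illustrate how the choice of M and L encodes the design goal
(these are textbook examples, not taken from the slides):

    Minimum time:  M = 0, L = 1, so  J(u) = \int_{t_0}^{T} 1\, dt = T - t_0,
    the elapsed time until the target set is reached.

    Minimum control energy:  M = 0, L = u^T u, so  J(u) = \int_{t_0}^{T} u(t)^T u(t)\, dt.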


A Typical Performance Index

1 A typical regulator problem involves forcing the plant output to stay at a stationary set
point r0.
2 An optimal control problem can be formulated by stating that the best controller is the one
that minimizes the shaded area between the response y(t) and the set point r0.
3 The shaded area is a time integral of the form:

    \int_{0}^{\infty} |y(t) - r_0|\, dt


Calculus of Variations


The Basis of Optimal Control – Calculus of Variations

1 The calculus of variations is a general method for the optimization of functions or
functionals (functions of functions).
2 The brachistochrone problem
1 Perhaps the first problem in the calculus of variations, formulated by Johann Bernoulli in 1696.
2 What is the shape of the wire in order that the bead, when released from rest at point A,
slides to B in minimum time?
3 Here we are required to minimise

    \int_{A}^{B} dt = \int_{A}^{B} \frac{ds}{v}

where s is the arc-length along the wire and v is the instantaneous speed of the bead.


The brachistochrone problem


4 Total energy is conserved, so, if m is the mass of the bead,

    \tfrac{1}{2} m v^2 = m g y \quad\Rightarrow\quad v = \sqrt{2gy}

5 Noting that

    ds = \sqrt{(dx)^2 + (dy)^2} = dx \sqrt{1 + \left(\frac{dy}{dx}\right)^2} = dx \sqrt{1 + y'^2}

6 we have to minimise

    J(y) = \frac{1}{\sqrt{2g}} \int_{0}^{a} \left( \frac{1 + y'^2}{y} \right)^{1/2} dx

with y(0) = 0 and y(a) = b


The brachistochrone problem

    J(y) = \frac{1}{\sqrt{2g}} \int_{0}^{a} \left( \frac{1 + y'^2}{y} \right)^{1/2} dx

7 For each curve y(x) joining A and B, J[y] has a numerical value for the time taken.
8 Thus J acts on a set of functions to produce a corresponding set of numbers.
9 Integrals like J[y] above are called functionals.

Calculus of variations deals with optimisation problems of the type described above.


A Basic Problem in the Calculus of Variations


1 A scalar integral which is a function of the time-dependent vector x(t), its time derivative,
and time is given:

    J(x) = \int_{t_0}^{t_1} F(x(t), \dot{x}(t), t)\, dt

The task is to determine the specific time function x(t) which minimizes J.
2 The function F is usually called a loss function and J is called an optimization index or a
performance index.
3 In order to find the required time functions it is necessary to know the boundary conditions
for these functions.
4 Usually the initial condition is known or specified, x(t0) = x0, and at the final time there
are several possibilities:
1 t1 is fixed and x(t1) is fixed
2 t1 is fixed and x(t1) is free
3 t1 is free and x(t1) is fixed
4 x(t) must satisfy certain constraint conditions at the final time.


The Fixed-end-point Problem

1 Consider functions x(t), where t is the independent variable and x the dependent variable,
so that x(t) defines the equation of a curve.
2 The problem considered here is to find, among all curves joining two fixed points (t0, x0)
and (t1, x1), the equation of the curve minimising a given functional. Thus the problem is

    \min J(x) = \int_{t_0}^{t_1} f(x, \dot{x}, t)\, dt    (3)


The Fixed-end-point Problem

3 If x = x∗(t) is a minimising curve, then we have that

    J(x) ≥ J(x∗)

for all other curves x(t) satisfying the end constraints.
4 Define x(t) = x∗(t) + δx(t), where δ is a small quantity independent of x∗, x and t.
5 The difference ∆J = J(x) − J(x∗) is known as the variation of J:

    \Delta J = \int_{t_0}^{t_1} f(x^* + \delta x,\, \dot{x}^* + \delta \dot{x},\, t)\, dt
             - \int_{t_0}^{t_1} f(x^*, \dot{x}^*, t)\, dt


The Fixed-end-point Problem

6 At each t ∈ [t0, t1] expand the first integrand as a Taylor series

    \Delta J = \int_{t_0}^{t_1} \left[ f(x^*, \dot{x}^*, t)
             + \frac{\partial f}{\partial x}\,\delta x
             + \frac{\partial f}{\partial \dot{x}}\,\delta \dot{x} + \cdots \right] dt
             - \int_{t_0}^{t_1} f(x^*, \dot{x}^*, t)\, dt
             = \int_{t_0}^{t_1} \left[ \frac{\partial f}{\partial x}\,\delta x
             + \frac{\partial f}{\partial \dot{x}}\,\delta \dot{x} + \cdots \right] dt
             = \delta J + \cdots

where δJ is called the first variation of J,

    \delta J = \int_{t_0}^{t_1} \left[ \frac{\partial f}{\partial x}\,\delta x(t)
             + \frac{\partial f}{\partial \dot{x}}\,\delta \dot{x}(t) \right] dt    (4)

The last term of this equation is simplified using integration by parts

    \int_{t_0}^{t_1} \frac{\partial f}{\partial \dot{x}}\,\delta \dot{x}(t)\, dt
    = \left. \frac{\partial f}{\partial \dot{x}}\,\delta x(t) \right|_{t_1}
    - \left. \frac{\partial f}{\partial \dot{x}}\,\delta x(t) \right|_{t_0}
    - \int_{t_0}^{t_1} \frac{d}{dt}\!\left( \frac{\partial f}{\partial \dot{x}} \right) \delta x(t)\, dt    (5)


The Fixed-end-point Problem

7 At the initial time t = t0 the variation is zero, δx(t)|_{t0} = δx(t0) = 0, so

    \delta J = \int_{t_0}^{t_1} \left[ \frac{\partial f}{\partial x}
             - \frac{d}{dt}\!\left( \frac{\partial f}{\partial \dot{x}} \right) \right] \delta x(t)\, dt
             + \left. \frac{\partial f}{\partial \dot{x}}\,\delta x(t) \right|_{t_1}

Because for the optimal solution x(t) = x∗(t) the variation δJ must be zero for every admissible
δx(t), it follows that

    \frac{\partial f(x^*, \dot{x}^*, t)}{\partial x}
    - \frac{d}{dt}\!\left( \frac{\partial f(x^*, \dot{x}^*, t)}{\partial \dot{x}} \right) = 0    (6)

and, when x(t1) is not fixed,

    \left. \frac{\partial f(x^*, \dot{x}^*, t)}{\partial \dot{x}} \right|_{t_1} = 0    (7)

Equation (6) is called the Euler-Lagrange equation and the boundary condition (7) is called a
natural boundary condition (for a truly fixed end point, δx(t1) = 0 and (7) is not needed).
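As an illustration (a standard textbook computation, not worked out on the original slide), apply
the Euler-Lagrange equation to the brachistochrone integrand f = \sqrt{(1 + y'^2)/y} (dropping the
constant factor 1/\sqrt{2g}), with x playing the role of the independent variable t. Since f does
not depend explicitly on x, the Euler-Lagrange equation reduces to the Beltrami identity

    f - y'\,\frac{\partial f}{\partial y'} = \text{const}
    \quad\Longrightarrow\quad y\,(1 + y'^2) = c

whose solutions are cycloids, x = \tfrac{c}{2}(\theta - \sin\theta), y = \tfrac{c}{2}(1 - \cos\theta),
with the constant c chosen so that the curve passes through (a, b).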


A Simplified Optimal Control Problem

1 The Euler-Lagrange equation is a very general result which forms an excellent basis for optimal
control.
2 However, two important modifications are necessary.
1 Constraints to be met by the optimal x(t) must be introduced.
2 A control input must be added.
3 Consider the simplified optimal control problem, as follows
1 The system dynamics are assumed to be time-invariant (no explicit dependence of f on t)
2 The terminal time T will be assumed fixed
3 Initial conditions: x(0) = x0 fixed
4 Performance index: J(u) = M(x(T)) + \int_{0}^{T} L(x(t), u(t))\, dt
5 Problem: Find u∗(t), 0 ≤ t ≤ T, which minimizes J(u)


A Simplified Optimal Control Problem


1 First suppose that u∗(t), 0 ≤ t ≤ T, is the optimal solution and let x∗(t) be the corresponding
(optimal) state trajectory.
2 Next, consider the variation u∗ + δu(t) of u∗(t), resulting in a state trajectory x∗ + δx(t)
3 Now consider the variation ∆J

    \Delta J = M(x^*(T) + \delta x(T)) + \int_{0}^{T} L(x^* + \delta x,\, u^* + \delta u)\, dt
             - M(x^*(T)) - \int_{0}^{T} L(x^*, u^*)\, dt

4 Expanding using a Taylor series:

    \Delta J = M(x^*(T)) + (\delta x(T))^T \frac{\partial M(T)}{\partial x}
             + \int_{0}^{T} \left[ L(x^*, u^*) + (\delta x)^T \frac{\partial L}{\partial x}
             + (\delta u)^T \frac{\partial L}{\partial u} + \cdots \right] dt
             - M(x^*(T)) - \int_{0}^{T} L(x^*, u^*)\, dt

where all partial derivatives are evaluated on the optimal trajectory (x∗(t), u∗(t))


A Simplified Optimal Control Problem

5 Ignoring higher order terms gives the first variation of the performance index as:

    \delta J = (\delta x(T))^T \frac{\partial M(T)}{\partial x}
             + \int_{0}^{T} \left[ (\delta x)^T \frac{\partial L}{\partial x}
             + (\delta u)^T \frac{\partial L}{\partial u} \right] dt

which may be written out in full as:

    \delta J = \sum_{i=1}^{n} \frac{\partial M(T)}{\partial x_i}\,\delta x_i(T)
             + \int_{0}^{T} \left\{ \sum_{i=1}^{n} \frac{\partial L}{\partial x_i}\,\delta x_i
             + \sum_{j=1}^{m} \frac{\partial L}{\partial u_j}\,\delta u_j \right\} dt

Note that, as u∗(t) is optimal, δJ = 0.

Problem: The δxi and δuj are not independent increments, since they are linked via the
dynamic constraints ẋ = f(x, u).


A Simplified Optimal Control Problem

1 The constraints are conveniently handled by the introduction of Lagrange multipliers.
2 We need n scalar Lagrange variables, pi(t), one for each dynamic constraint ẋi = fi(x, u).
3 Consider the n integrals

    \Phi_i = \int_{0}^{T} p_i(t) \left( f_i(x, u) - \dot{x}_i(t) \right) dt, \qquad i = 1, 2, \ldots, n

evaluated along the optimal trajectory (x∗(t), u∗(t)). Their variation is

    \Delta \Phi_i = \int_{0}^{T} p_i(t) \left[ f_i(x + \delta x,\, u + \delta u)
                  - \left( \dot{x}_i(t) + \frac{d}{dt}(\delta x_i) \right) \right] dt
                  - \int_{0}^{T} p_i(t) \left( f_i(x, u) - \dot{x}_i(t) \right) dt


A Simplified Optimal Control Problem

4 Expanding

    \Delta \Phi_i = \int_{0}^{T} p_i(t) \left[ f_i(x, u)
                  + \sum_{k=1}^{n} \frac{\partial f_i}{\partial x_k}\,\delta x_k
                  + \sum_{j=1}^{m} \frac{\partial f_i}{\partial u_j}\,\delta u_j
                  - \dot{x}_i(t) - \frac{d}{dt}(\delta x_i) + \cdots \right] dt
                  - \int_{0}^{T} p_i(t) \left( f_i(x, u) - \dot{x}_i(t) \right) dt

or

    \delta \Phi_i = \int_{0}^{T} p_i(t) \left[ \sum_{k=1}^{n} \frac{\partial f_i}{\partial x_k}\,\delta x_k
                  + \sum_{j=1}^{m} \frac{\partial f_i}{\partial u_j}\,\delta u_j
                  - \frac{d}{dt}(\delta x_i) \right] dt


A Simplified Optimal Control Problem

5 Integrating the last term by parts (and using δxi(0) = 0, since x(0) is fixed)

    \int_{0}^{T} p_i(t)\,\frac{d}{dt}(\delta x_i)\, dt
    = p_i(t)\,\delta x_i(t)\Big|_{0}^{T} - \int_{0}^{T} \dot{p}_i(t)\,\delta x_i(t)\, dt
    = p_i(T)\,\delta x_i(T) - \int_{0}^{T} \dot{p}_i(t)\,\delta x_i(t)\, dt

hence

    \delta \Phi_i = \int_{0}^{T} p_i(t) \left[ \sum_{k=1}^{n} \frac{\partial f_i}{\partial x_k}\,\delta x_k
                  + \sum_{j=1}^{m} \frac{\partial f_i}{\partial u_j}\,\delta u_j \right] dt
                  + \int_{0}^{T} \dot{p}_i(t)\,\delta x_i(t)\, dt - p_i(T)\,\delta x_i(T)

and, summing over i,

    \sum_{i=1}^{n} \delta \Phi_i
    = \int_{0}^{T} \sum_{k=1}^{n} \delta x_k \left( \sum_{i=1}^{n} p_i(t)\,\frac{\partial f_i}{\partial x_k} \right) dt
    + \int_{0}^{T} \sum_{j=1}^{m} \delta u_j \left( \sum_{i=1}^{n} p_i(t)\,\frac{\partial f_i}{\partial u_j} \right) dt
    + \int_{0}^{T} (\delta x(t))^T \dot{p}(t)\, dt - (\delta x(T))^T p(T)


A Simplified Optimal Control Problem

6 Thus

    \delta J + \sum_{i=1}^{n} \delta \Phi_i
    = (\delta x(T))^T \frac{\partial M(T)}{\partial x}
    + \int_{0}^{T} \sum_{i=1}^{n} \delta x_i \left( \frac{\partial L}{\partial x_i}
      + \sum_{k=1}^{n} p_k(t)\,\frac{\partial f_k}{\partial x_i} \right) dt
    + \int_{0}^{T} \sum_{j=1}^{m} \delta u_j \left( \frac{\partial L}{\partial u_j}
      + \sum_{i=1}^{n} p_i(t)\,\frac{\partial f_i}{\partial u_j} \right) dt
    + \int_{0}^{T} (\delta x(t))^T \dot{p}(t)\, dt - (\delta x(T))^T p(T)

7 A necessary condition for u∗(t) to be optimal is that

    \delta J + \sum_{i=1}^{n} \delta \Phi_i = 0

8 The expression for the variation can be simplified considerably by introducing the
Hamiltonian function

    H(x, u) = L(x, u) + p^T f(x, u) = L(x, u) + \sum_{i=1}^{n} p_i(t)\, f_i(x, u)

A Simplified Optimal Control Problem

9 Note that

    \frac{\partial H(x, u)}{\partial x_i} = \frac{\partial L}{\partial x_i}
    + \sum_{k=1}^{n} p_k(t)\,\frac{\partial f_k}{\partial x_i}

and

    \frac{\partial H(x, u)}{\partial u_j} = \frac{\partial L}{\partial u_j}
    + \sum_{i=1}^{n} p_i(t)\,\frac{\partial f_i}{\partial u_j}

With this substitution we get that

    (\delta x(T))^T \left( \frac{\partial M(T)}{\partial x} - p(T) \right)
    + \int_{0}^{T} \left[ (\delta x)^T \left( \frac{\partial H}{\partial x} + \dot{p}(t) \right)
    + (\delta u)^T \frac{\partial H}{\partial u} \right] dt = 0

is a necessary condition for optimality.


A Simplified Optimal Control Problem

10 Now, we have the Lagrange variables at our disposal and we make the choice:

    \dot{p}(t) = -\frac{\partial H}{\partial x}, \qquad p(T) = \frac{\partial M(T)}{\partial x}

which defines the "co-state" of the system. With this choice we need

    \delta J = \int_{0}^{T} (\delta u)^T \frac{\partial H}{\partial u}\, dt = 0

for a minimum, or equivalently, since δu is arbitrary, we have that

    \frac{\partial H}{\partial u} = 0

along the optimal trajectory, i.e. the Hamiltonian is at an extremum at every point of the
optimal trajectory.


A Simplified Optimal Control Problem

To find the optimal solution to the problem:

1 We define the Hamiltonian H(x, u, p) = L(x, u) + p^T f(x, u) and set \frac{\partial H}{\partial u} = 0
2 To find the optimal control we need to solve this condition simultaneously with:

    \dot{x} = \frac{\partial H}{\partial p} = f(x, u), \qquad x(0) = x_0 \qquad \text{(state equations)}

and

    \dot{p} = -\frac{\partial H}{\partial x}, \qquad p(T) = \frac{\partial M(T)}{\partial x} \qquad \text{(co-state equations)}

Note:
1 the state equations need to be integrated forward in time (from the initial condition x(0) = x0),
2 the co-state equations backwards in time (from the terminal condition p(T) = \frac{\partial M(T)}{\partial x}).
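As a brief illustration (a standard scalar example, not taken from the original slides), take
ẋ = u, x(0) = x0, with M = 0 and L = ½(x² + u²) over [0, T]:

    H = \tfrac{1}{2}(x^2 + u^2) + p\,u
    \frac{\partial H}{\partial u} = u + p = 0 \;\Rightarrow\; u^* = -p
    \dot{x} = \frac{\partial H}{\partial p} = u^* = -p, \qquad x(0) = x_0
    \dot{p} = -\frac{\partial H}{\partial x} = -x, \qquad p(T) = 0

Eliminating u gives the two-point boundary value problem ẋ = −p, ṗ = −x, with x specified at t = 0
and p at t = T; as T → ∞ the solution reduces to p = x, i.e. the constant state-feedback law u∗ = −x.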


Linear Quadratic Regulator


Linear Quadratic Regulator

The shaded area is

    \int_{0}^{\infty} |r(t) - y(t)|\, dt

From a computational point of view it is easier to use a quadratic performance index of the form

    \int_{0}^{\infty} (r(t) - y(t))^2\, dt


Linear Quadratic Regulator

1 We now consider the specific problem of determining the optimal control of a linear
time-invariant (LTI) system with a quadratic performance index.
2 The plant dynamics are described by the standard state-space equations:

    \dot{x}(t) = A x(t) + B u(t), \qquad x(0) = x_0

where A ∈ \mathbb{R}^{n \times n} and B ∈ \mathbb{R}^{n \times m}
3 The performance index to be minimised is:

    J(u) = \tfrac{1}{2}\, x^T(T)\, S\, x(T) + \tfrac{1}{2} \int_{0}^{T} \left( x^T Q x + u^T R u \right) dt

where T is fixed and S ≥ 0, Q ≥ 0 and R > 0 are symmetric constant matrices (R must be positive
definite so that R⁻¹ exists).


Linear Quadratic Regulator

4 First, we form the Hamiltonian:

    H(x, u) = \tfrac{1}{2} \left( x^T Q x + u^T R u \right) + p^T (A x + B u)

5 To obtain the minimising solution we need to solve:

    \frac{\partial H}{\partial u} = 0 \;\Rightarrow\; R u + B^T p = 0 \;\Rightarrow\; u = -R^{-1} B^T p

    \dot{p} = -\frac{\partial H}{\partial x} = -Q x - A^T p

    p(T) = \frac{\partial M(T)}{\partial x} = S\, x(T)

together with the dynamic equations.


Linear Quadratic Regulator

6 Assembled together in matrix form as:

    \begin{bmatrix} \dot{x} \\ \dot{p} \end{bmatrix}
    = \begin{bmatrix} A & -B R^{-1} B^T \\ -Q & -A^T \end{bmatrix}
      \begin{bmatrix} x \\ p \end{bmatrix}

subject to the boundary conditions x(0) = x0 and p(T) = S x(T).
7 The matrix defining the dynamic map has a special structure and is called a "Hamiltonian"
matrix;
8 The overall dynamic map defines a "Hamiltonian system".
9 To solve the problem, assume that p(t) = P(t) x(t) for some unknown matrix function P(t)
which satisfies P(T) = S.
10 If we can find such a P(t), then the assumption is valid.


Linear Quadratic Regulator


1 Differentiating p(t) = P(t) x(t) and using the state equation, we get:

    \dot{p} = \dot{P} x + P \dot{x} = \dot{P} x + P (A x + B u)
            = \dot{P} x + P \left( A x - B R^{-1} B^T p \right)
            = \dot{P} x + P A x - P B R^{-1} B^T P x

Using the co-state equation \dot{p} = -Q x - A^T p = -Q x - A^T P x gives

    -\dot{P} x = \left( A^T P + P A - P B R^{-1} B^T P + Q \right) x

for t ∈ [0, T]
2 Since this must hold for all state trajectories, given any x(0) = x0, it is necessary that:

    -\dot{P} = A^T P + P A - P B R^{-1} B^T P + Q

for t ∈ [0, T]
3 This is a matrix differential equation, known as the (differential) Riccati equation.
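As a numerical illustration (a sketch only; the plant, weights and horizon below are assumed for
the example), the differential Riccati equation can be integrated backward in time in MATLAB by
reversing time, τ = T − t, so that dP/dτ = AᵀP + PA − PBR⁻¹BᵀP + Q with P(τ = 0) = S:

    A = [0 1; -2 -3];  B = [0; 1];        % assumed example plant
    Q = eye(2);  R = 1;  S = zeros(2);    % assumed weights and terminal cost
    T = 5;  n = size(A,1);

    % dP/dtau, with P passed through ode45 as a column vector
    ric = @(tau,Pv) reshape(A'*reshape(Pv,n,n) + reshape(Pv,n,n)*A ...
          - reshape(Pv,n,n)*B*(R\(B'*reshape(Pv,n,n))) + Q, [], 1);

    [tau, Pv] = ode45(ric, [0 T], S(:));
    P0 = reshape(Pv(end,:), n, n);        % P(t) at t = 0 (i.e. tau = T)
    K0 = R\(B'*P0)                        % time-varying gain K(t) = R^{-1} B' P(t) at t = 0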

Linear Quadratic Regulator

4 The solution P(t) to the differential Riccati equation is unique.
5 Note that the solution to the problem is obtained in state-feedback form

    u^*(t) = -K(t)\, x(t)

where K(t) = R^{-1} B^T P(t)
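A standard related result (stated here for completeness; it is not spelled out on the slides):
under the usual stabilizability and detectability assumptions, for the infinite-horizon case
(T → ∞ with S = 0) the solution P(t) converges to the constant solution P of the algebraic
Riccati equation

    A^T P + P A - P B R^{-1} B^T P + Q = 0

and the optimal control becomes the constant state feedback u∗(t) = −Kx(t) with K = R⁻¹BᵀP.
This is the problem solved by the MATLAB lqr command described in the next section.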


MATLAB for Control Law Design and Evaluation


MATLAB for Control Law Design and Evaluation

[K,S,e] = lqr(A,B,Q,R,N) calculates the optimal gain matrix K for continuous-time models with
dynamics ẋ = Ax + Bu that minimizes the quadratic cost function

    J(u) = \int_{0}^{\infty} \left( x^T Q x + u^T R u + 2\, x^T N u \right) dt

It also returns the solution S of the associated algebraic Riccati equation and the closed-loop
eigenvalues e = eig(A − BK).
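A minimal usage sketch (the plant, weights, and initial state below are assumed for illustration
only, not taken from the slides):

    A = [0 1; 0 0];  B = [0; 1];          % assumed double-integrator plant
    C = [1 0];  D = 0;
    Q = diag([10 1]);  R = 0.1;           % assumed state and input weights

    [K, S, e] = lqr(A, B, Q, R);          % gain K, Riccati solution S, closed-loop poles e

    sys_cl = ss(A - B*K, B, C, D);        % closed-loop system under u = -K*x
    initial(sys_cl, [1; 0])               % regulation from the assumed initial state x0 = [1; 0]

Increasing the weights in Q relative to R penalizes state error more heavily than control effort,
giving faster but more aggressive regulation.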

Questions?
