MUS420
Introduction to Linear State Space Models

Julius O. Smith III ([email protected])
Center for Computer Research in Music and Acoustics (CCRMA)
Department of Music, Stanford University
Stanford, California 94305

February 5, 2019

Equations of motion for any physical system may be conveniently formulated in terms of its state x(t):

[Figure: input forces u(t) and the current state x(t) enter the model, which produces ẋ(t); an integrator converts ẋ(t) back into the state x(t)]
Force-Driven Mass Example

Consider f = ma for the force-driven mass:

• Since the mass m is constant, we can use momentum p(t) ≜ m v(t) in place of velocity (more fundamental, since momentum is conserved)

• x(t0) and p(t0) (or v(t0)) define the state of the mass m at time t0

• In the absence of external forces f(t), all future states are predictable from the state at time t0:

p(t) = p(t0)   (conservation of momentum)

x(t) = x(t0) + (1/m) ∫_{t0}^{t} p(τ) dτ,   t ≥ t0

• External forces f(t) drive the state to arbitrary points in state space:

p(t) = p(t0) + ∫_{t0}^{t} f(τ) dτ,   t ≥ t0,   where p(t) ≜ m v(t)

x(t) = x(t0) + (1/m) ∫_{t0}^{t} p(τ) dτ,   t ≥ t0

Forming Outputs

Any system output is some function of the state, and possibly the input (directly):

y(t) ≜ o_t[x(t), u(t)]

[Figure: input forces u(t) drive the model; the output y(t) = o_t[x(t), u(t)] is formed from the state x(t) and the input u(t)]

Usually the output is a linear combination of state variables and possibly the current input:

y(t) = C x(t) + D u(t)

where C and D are constant matrices of linear-combination coefficients.
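As a quick numerical illustration of the two integral update equations above, here is a minimal Matlab/Octave sketch (the mass, force signal, and step size are arbitrary choices, not from these notes) that integrates momentum and position under an applied force using forward Euler:

% Forward-Euler integration of the force-driven mass state (illustrative sketch)
m  = 2;                      % mass (arbitrary)
dt = 0.001;                  % time step (arbitrary)
t  = 0:dt:1;                 % time axis
f  = sin(2*pi*3*t);          % some external force f(t) (arbitrary)
p  = zeros(size(t));         % momentum p(t) = m v(t)
x  = zeros(size(t));         % position x(t)
for k = 1:length(t)-1
  p(k+1) = p(k) + f(k)*dt;       % p(t) = p(t0) + integral of f
  x(k+1) = x(k) + (p(k)/m)*dt;   % x(t) = x(t0) + (1/m) integral of p
end
plot(t, x), xlabel('t'), ylabel('x(t)')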
For the simple example of a mass m driven by an external force f along the x axis:

[Figure: mass m sliding along the x axis, with position x measured from x = 0, velocity v(t), and applied force f(t)]

• There is only one energy-storage element (the mass), and it stores energy in the form of kinetic energy

• Therefore, we should choose the state variable to be velocity v = ẋ (or momentum p = mv = mẋ)

• Newton's f = ma readily gives the state-space formulation:

v̇ = (1/m) f

or

ṗ = f

• This is a first-order system (no state vector needed)

Why not include position x(t) as well as velocity v(t) in the state-space model for the force-driven mass?

We might expect this because we know from before that the complete physical state of a mass consists of its velocity v and position x! The corresponding two-state model would be

d/dt [x(t); v(t)] = [0 1; 0 0] [x(t); v(t)] + [0; 1/m] f(t)
• Position x does not affect the stored energy

E_m = (1/2) m v²

• Velocity v(t) is the only energy-storing degree of freedom

• Only velocity v(t) is needed as a state variable

• The initial position x(0) can be kept "on the side" to enable computation of the complete state in position-momentum space:

x(t) = x(0) + ∫_0^t v(τ) dτ

• In other words, the position can be derived from the velocity history without knowing the force history

• Note that the external force f(t) can only drive v̇(t). It cannot drive ẋ(t) directly:

d/dt [x(t); v(t)] = [0 1; 0 0] [x(t); v(t)] + [0; 1/m] f(t)

More generally:

• State variable = physical amplitude for some energy-storing degree of freedom

• Mechanical Systems:
State variable for each
– ideal spring (linear or rotational)
– point mass (or moment of inertia)
times the number of dimensions in which it can move

• RLC Electric Circuits:
State variable for each capacitor and inductor

• In Discrete-Time:
State variable for each unit-sample delay

• Continuous- or Discrete-Time:
Dimensionality of state space = order of the system (LTI systems)
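Returning to the force-driven mass, the two-state matrix form above can also be simulated directly. In this sketch (same arbitrary values as before, not from the notes), choosing C = [1 0] and D = 0 makes the output the position, illustrating y = Cx + Du:

% Matrix-form simulation of d/dt [x; v] = A [x; v] + B f(t), y = C [x; v] + D f(t)
m  = 2;  dt = 0.001;  t = 0:dt:1;
A  = [0 1; 0 0];  B = [0; 1/m];   % the force drives only vdot
C  = [1 0];       D = 0;          % output = position
f  = sin(2*pi*3*t);               % arbitrary test force
X  = zeros(2, length(t));         % state trajectory [x; v]
y  = zeros(1, length(t));
for k = 1:length(t)-1
  y(k)     = C*X(:,k) + D*f(k);
  X(:,k+1) = X(:,k) + dt*(A*X(:,k) + B*f(k));   % forward Euler
end
y(end) = C*X(:,end) + D*f(end);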
Discrete-Time Linear State Space Models

For linear, time-invariant systems, a discrete-time state-space model looks like a vector first-order finite-difference model:

x(n + 1) = A x(n) + B u(n)
y(n) = C x(n) + D u(n)

where

• x(n) ∈ R^N = state vector at time n
• u(n) = p × 1 vector of inputs
• y(n) = q × 1 output vector
• A = N × N state transition matrix
• B = N × p input coefficient matrix
• C = q × N output coefficient matrix
• D = q × p direct path coefficient matrix

The state-space representation is especially powerful for

• multi-input, multi-output (MIMO) linear systems
• time-varying linear systems (every matrix can have a time subscript n)

Zero-State Impulse Response (Markov Parameters)

Linear state-space model:

x(n + 1) = A x(n) + B u(n)
y(n) = C x(n) + D u(n)

The zero-state impulse response of a state-space model is easily found by direct calculation: let x(0) ≜ 0 and u = I_p δ(n) = diag(δ(n), ..., δ(n)). Then

h(0) = C x(0) + D I_p δ(0) = D
x(1) = A x(0) + B I_p δ(0) = B
h(1) = C B
x(2) = A x(1) + B I_p δ(1) = A B
h(2) = C A B
x(3) = A x(2) + B I_p δ(2) = A^2 B
h(3) = C A^2 B
...
h(n) = C A^(n−1) B,   n > 0
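A few lines of Matlab can confirm this pattern numerically. The following is a minimal single-input, single-output sketch (the matrix values are arbitrary examples, not from these notes) comparing the simulated impulse response with the closed form h(n) = C A^(n−1) B:

% Verify the Markov parameters h(n) = C*A^(n-1)*B for an arbitrary SISO example
A = [0.5 -0.3; 1 0];        % N x N state transition matrix (arbitrary)
B = [1; 0];                 % N x p input matrix
C = [1.5 2.7];              % q x N output matrix
D = 1;                      % q x p direct path
Nh = 6;                     % number of impulse-response samples to compute
x  = zeros(2,1);            % zero initial state
h  = zeros(1,Nh);
u  = [1, zeros(1,Nh-1)];    % unit impulse input
for n = 1:Nh
  h(n) = C*x + D*u(n);      % y(n) = C x(n) + D u(n)
  x    = A*x + B*u(n);      % x(n+1) = A x(n) + B u(n)
end
hm = [D, arrayfun(@(n) C*A^(n-1)*B, 1:Nh-1)];   % closed form
disp(max(abs(h - hm)))      % should be zero (up to round-off)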
Thus, the "impulse response" of the state-space model can be summarized as

h(n) = D,              n = 0
h(n) = C A^(n−1) B,    n > 0

• Initial state x(0) is assumed to be 0

• The input "impulse" is u = I_p δ(n) = diag(δ(n), ..., δ(n))

• Each "impulse-response sample" h(n) is a q × p matrix, in general

• The impulse-response terms C A^n B for n ≥ 0 are called Markov parameters

• Recall the linear state-space model

x(n + 1) = A x(n) + B u(n)
y(n) = C x(n) + D u(n)

and its "impulse response"

h(n) = D,              n = 0
h(n) = C A^(n−1) B,    n > 0

• The transfer function is the z transform of the impulse response:

H(z) ≜ Σ_{n=0}^∞ h(n) z^(−n) = D + Σ_{n=1}^∞ C A^(n−1) B z^(−n) = D + z^(−1) C [ Σ_{n=0}^∞ (z^(−1) A)^n ] B

Summing the matrix geometric series (convergent for |z| larger than the largest eigenvalue magnitude of A) gives the closed form

H(z) = D + C (zI − A)^(−1) B
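This closed form is easy to cross-check numerically. The sketch below (using the second-order example treated in the next section, and the Signal Processing Toolbox function ss2tf) compares the polynomial transfer function with D + C(zI − A)^(−1)B at an arbitrary evaluation point:

% Cross-check: polynomial transfer function vs. H(z) = D + C*(z*I - A)^(-1)*B
A = [-1/2 -1/3; 1 0];  B = [1; 0];           % second-order example (derived below)
C = [3/2 8/3];         D = 1;
[b, a] = ss2tf(A, B, C, D);                  % transfer-function polynomials
z0 = 0.7 + 0.2i;                             % arbitrary evaluation point
Hpoly = polyval(b, z0) / polyval(a, z0);
Hss   = D + C * ((z0*eye(2) - A) \ B);
disp(abs(Hpoly - Hss))                       % should be ~0 (up to round-off)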
Going back to the time domain, we have the linear discrete-time state-space model

x(n + 1) = A x(n) + B u(n)
y(n) = C x(n) + D u(n)

and its "impulse response"

h(n) = D,              n = 0
h(n) = C A^(n−1) B,    n > 0

Given zero inputs and an initial state x(0) ≠ 0, we get

y(n) = C A^n x(0),   n = 0, 1, 2, ...

By superposition (for LTI systems), the complete response of a linear system is given by the sum of its forced response (such as the impulse response) and its initial-condition response.

A digital filter is often specified by its difference equation (Direct Form I). Second-order example:

y(n) = u(n) + 2 u(n−1) + 3 u(n−2) − (1/2) y(n−1) − (1/3) y(n−2)

Every nth-order difference equation can be reformulated as a first-order vector difference equation called the "state space" (or "state variable") representation:

x(n + 1) = A x(n) + B u(n)
y(n) = C x(n) + D u(n)

For the above example, we have, as we'll show,

A ≜ [−1/2 −1/3; 1 0]   (state transition matrix)
B ≜ [1; 0]             (matrix routing the input to the state variables)
C ≜ [3/2 8/3]          (output linear-combination matrix)
D ≜ 1                  (direct feedforward coefficient)
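These matrices can be checked by simulation. A minimal sketch (standard Matlab filter plus the state-space recursion; the test input is arbitrary) verifies that the state-space realization reproduces the Direct-Form-I output:

% Check: state-space realization vs. the difference equation (Direct Form I)
b = [1 2 3];  a = [1 1/2 1/3];           % y(n) = u(n)+2u(n-1)+3u(n-2) - (1/2)y(n-1) - (1/3)y(n-2)
A = [-1/2 -1/3; 1 0];  B = [1; 0];
C = [3/2 8/3];         D = 1;
u  = randn(1, 50);                       % arbitrary test input
y1 = filter(b, a, u);                    % difference-equation output
x  = [0; 0];  y2 = zeros(1, 50);         % zero initial state
for n = 1:50
  y2(n) = C*x + D*u(n);
  x     = A*x + B*u(n);
end
disp(max(abs(y1 - y2)))                  % should be ~0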
Converting to State-Space Form by Hand

3. Next, draw the strictly causal part in direct form II, as shown below:

[Figure: direct-form-II realization of the strictly causal part of the example filter]

For the second-order example above, this procedure yields the same output coefficients computed earlier by hand:

C = 1.5000    2.6667
D = 1
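The same matrices can be obtained automatically with the Signal Processing Toolbox function tf2ss, discussed next. A minimal usage sketch:

% tf2ss returns the controller-canonical state-space form directly
b = [1 2 3];  a = [1 1/2 1/3];
[A, B, C, D] = tf2ss(b, a)
% Expected result, matching the hand derivation above:
% A = [-0.5000 -0.3333; 1 0],  B = [1; 0],  C = [1.5000 2.6667],  D = 1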
Matlab Documentation

The tf2ss and ss2tf functions are documented at

http://www.mathworks.com/access/helpdesk/help/toolbox/signal/tf2ss.shtml

as well as within Matlab itself (e.g., help tf2ss). Related Signal Processing Toolbox functions include

• tf2sos — Convert digital filter transfer function parameters to second-order sections form.
• sos2ss — Convert second-order filter sections to state-space form.
• tf2zp — Convert transfer function filter parameters to zero-pole-gain form.
• zp2ss — Convert zero-pole-gain filter parameters to state-space form.

Similarity Transformations

A similarity transformation of a state-space system is a linear change of state-variable coordinates:

x(n) ≜ E x̃(n)

where

• x(n) = original state vector
• x̃(n) = state vector in new coordinates
• E = any invertible (one-to-one) matrix (linear transformation)

Substituting x(n) = E x̃(n) gives

E x̃(n + 1) = A E x̃(n) + B u(n)
y(n) = C E x̃(n) + D u(n)

Premultiplying the first equation above by E^(−1) gives

x̃(n + 1) = (E^(−1) A E) x̃(n) + (E^(−1) B) u(n)
y(n) = (C E) x̃(n) + D u(n)

Define the transformed system matrices by

Ã = E^(−1) A E
B̃ = E^(−1) B
C̃ = C E
D̃ = D
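A similarity transformation leaves the input/output behavior unchanged. The following sketch (the example matrices from earlier, plus an arbitrary invertible E chosen here for illustration) checks this by comparing transfer functions before and after the change of coordinates:

% Transfer function is invariant under a similarity transformation
A = [-1/2 -1/3; 1 0];  B = [1; 0];  C = [3/2 8/3];  D = 1;
E = [1 2; 3 4];                        % any invertible matrix (det = -2)
At = E\A*E;  Bt = E\B;  Ct = C*E;  Dt = D;
[b1, a1] = ss2tf(A,  B,  C,  D);
[b2, a2] = ss2tf(At, Bt, Ct, Dt);
disp(max(abs([b1 - b2, a1 - a2])))     % should be ~0 (up to round-off)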
Form the matrix E whose columns are the eigenvectors:

E = [e1 e2 ··· eN]

Then we have

A E = [λ1 e1  λ2 e2  ···  λN eN] = E Λ

where Λ ≜ diag(λ) is a diagonal matrix having λ ≜ [λ1, λ2, ..., λN]^T along its diagonal. Premultiplying by E^(−1) gives

E^(−1) A E = Λ

Thus, E = [e1 e2 ··· eN] is a similarity transformation that diagonalizes the system.¹

¹When there are repeated eigenvalues, there may be only one linearly independent eigenvector for the repeated group. We will not consider this case and refer the interested reader to a Web search on "generalized eigenvectors," e.g., http://en.wikipedia.org/wiki/Generalized_eigenvector .

[Figure: the second-order digital waveguide oscillator (signal flow graph)]

Note the assignments of the unit-delay outputs to the state variables x1(n) and x2(n). We have

x1(n+1) = c [x1(n) + x2(n)] − x2(n) = c x1(n) + (c − 1) x2(n)

and

x2(n+1) = x1(n) + c [x1(n) + x2(n)] = (1 + c) x1(n) + c x2(n)

In matrix form, the state transition can be written as

[x1(n+1); x2(n+1)] = [c  c−1; c+1  c] [x1(n); x2(n)]

where the 2 × 2 matrix is the state-transition matrix A.
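A few lines of Matlab confirm that iterating this state transition produces a sinusoid. In this sketch the sampling rate, oscillation frequency, and initial state are arbitrary choices for illustration:

% Iterate the digital waveguide oscillator state transition x(n+1) = A x(n)
fs = 8000;  fc = 100;                   % sampling rate and oscillation frequency (arbitrary)
c  = cos(2*pi*fc/fs);                   % c = cos(wc*T)
A  = [c, c-1; c+1, c];
N  = 200;  X = zeros(2, N);
X(:,1) = [1; 0];                        % excitation by initial conditions
for n = 1:N-1
  X(:,n+1) = A*X(:,n);
end
plot(X(1,:))                            % x1(n) oscillates sinusoidally at fc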
λi = cos(θ) ± j sin(θ) = e^(±jθ)

It is now clear that the system is a real sinusoidal oscillator for −1 ≤ c ≤ 1, oscillating at the normalized radian frequency ωcT ≜ θ ≜ arccos(c) ∈ [0, π].

We determined the frequency of oscillation ωcT from the eigenvalues λi of A. To study this system further, we can diagonalize A. For that we need the eigenvectors as well as the eigenvalues. (We already know λi from above, but this analysis will find them by a different method.)

Writing each eigenvector as ei = [1; ηi], the eigenvector equation A ei = λi ei gives the two scalar equations

c + ηi (c − 1) = λi
1 + c + c ηi = λi ηi

• Substitute the first into the second to eliminate λi:

1 + c + c ηi = [c + ηi (c − 1)] ηi = c ηi + ηi² (c − 1)
⇒ 1 + c = ηi² (c − 1)
⇒ ηi = ± √( (c + 1)/(c − 1) )
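Both the eigenvalues and the eigenvectors derived here are easy to check numerically. A minimal sketch with an arbitrary coefficient value:

% Numerical check of the eigenvalues and eigenvectors of A = [c c-1; c+1 c]
c = 0.3;                                  % arbitrary coefficient in (-1, 1)
A = [c, c-1; c+1, c];
theta = acos(c);
eig(A)                                    % numerical eigenvalues
[exp(1j*theta), exp(-1j*theta)]           % e^(+/- j*theta), matching up to ordering
eta = sqrt((c+1)/(c-1));                  % imaginary for c in (-1, 1)
lam = c + eta*(c-1);                      % corresponding eigenvalue
disp(norm(A*[1; eta] - lam*[1; eta]))     % should be ~0: [1; eta] is an eigenvector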
• We have found both eigenvectors:

e1 = [1; η],   e2 = [1; −η],   where η ≜ √( (c + 1)/(c − 1) )

They are linearly independent provided η ≠ 0 ⇔ c ≠ −1, and finite provided c ≠ 1.

• The eigenvalues are then

λi = c + ηi (c − 1) = c ± (c − 1) √( (c + 1)/(c − 1) ) = c ± √( c² − 1 )

• Assuming |c| < 1, they can be written as

λi = c ± j √( 1 − c² )

• With c ∈ (−1, 1), define θ ≜ arccos(c), i.e., c = cos(θ) and √(1 − c²) = sin(θ).

• The eigenvalues become

λ1 = c + j √(1 − c²) = cos(θ) + j sin(θ) = e^(jθ)
λ2 = c − j √(1 − c²) = cos(θ) − j sin(θ) = e^(−jθ)

as expected.

We again found the explicit formula for the frequency of oscillation:

ωc = θ/T = fs arccos(c)

where fs denotes the sampling rate. Or,

c = cos(ωc T)

The coefficient range c ∈ (−1, 1) corresponds to frequencies f ∈ (−fs/2, fs/2).

We have shown that the example system oscillates sinusoidally at any desired digital frequency ωc when c = cos(ωc T), where T denotes the sampling interval.

The Diagonalized Example System

We can now diagonalize our system using the similarity transformation

E = [e1 e2] = [1 1; η −η]

where η = √( (c + 1)/(c − 1) ).

We have only been working with the state-transition matrix A up to now. The system has no inputs, so it must be excited by initial conditions (although we could easily define one or two inputs that sum into the delay elements).
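Continuing the numerical sketch from before (arbitrary c again), the transformed matrix E^(−1)AE is indeed diagonal, with e^(±jθ) on its diagonal:

% Diagonalizing the oscillator's state-transition matrix
c = 0.3;  theta = acos(c);
A   = [c, c-1; c+1, c];
eta = sqrt((c+1)/(c-1));
E   = [1, 1; eta, -eta];                 % columns are the eigenvectors e1, e2
At  = E\A*E;                             % should be diagonal
disp(At)
disp([exp(1j*theta), exp(-1j*theta)])    % the diagonal entries, up to ordering
% (which of e^(+j*theta), e^(-j*theta) comes first depends on the square-root branch used for eta)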
We have two natural choices of output, which are the state variables x1(n) and x2(n), corresponding to the choices C = [1, 0] and C = [0, 1]:

y1(n) ≜ x1(n) = [1, 0] x(n)
y2(n) ≜ x2(n) = [0, 1] x(n)

Thus, a convenient choice of the system C matrix is the 2 × 2 identity matrix.

For the diagonalized system we obtain

Ã = E^(−1) A E = [e^(jθ) 0; 0 e^(−jθ)]
B̃ = E^(−1) B = 0
C̃ = C E = E = [1 1; η −η]
D̃ = 0

where θ = arccos(c) and η = √( (c + 1)/(c − 1) ), as derived above.

We may now view our state-output signals in terms of the modal representation:

y1(n) = [1, 0] x(n) = [1, 0] [1 1; η −η] x̃(n) = [1, 1] x̃(n) = λ1^n x̃1(0) + λ2^n x̃2(0)

y2(n) = [0, 1] x(n) = [0, 1] [1 1; η −η] x̃(n) = [η, −η] x̃(n) = η λ1^n x̃1(0) − η λ2^n x̃2(0)

The output signal from the first state variable x1(n) is

y1(n) = λ1^n x̃1(0) + λ2^n x̃2(0) = e^(jωc nT) x̃1(0) + e^(−jωc nT) x̃2(0)

The initial condition x(0) = [1, 0]^T corresponds to the modal initial state

x̃(0) = E^(−1) [1; 0] = (1/(−2η)) [−η −1; −η 1] [1; 0] = [1/2; 1/2]

For this initialization, the output y1 from the first state variable x1 is simply

y1(n) = ( e^(jωc nT) + e^(−jωc nT) ) / 2 = cos(ωc n T)

Similarly, y2(n) is proportional to sin(ωc n T) (a "phase quadrature" output), with amplitude η.
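Finally, the cosine output from the initial state x(0) = [1, 0]^T can be confirmed numerically; a minimal sketch (same arbitrary frequency and sampling-rate choices as in the earlier oscillator sketch):

% Output y1(n) = x1(n) = cos(wc*n*T) for initial state x(0) = [1; 0]
fs = 8000;  fc = 100;  T = 1/fs;
c  = cos(2*pi*fc*T);
A  = [c, c-1; c+1, c];
N  = 400;  x = [1; 0];  y1 = zeros(1, N);
for n = 1:N
  y1(n) = x(1);                          % output chosen as the first state variable
  x     = A*x;
end
n = 0:N-1;
disp(max(abs(y1 - cos(2*pi*fc*T*n))))    % should be ~0 (up to round-off)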