
MUS420
Introduction to Linear State Space Models
Julius O. Smith III ([email protected])
Center for Computer Research in Music and Acoustics (CCRMA)
Department of Music, Stanford University
Stanford, California 94305
February 5, 2019

Outline

• State Space Models
• Linear State Space Formulation
• Markov Parameters (Impulse Response)
• Transfer Function
• Difference Equations to State Space Models
• Similarity Transformations
• Modal Representation (Diagonalization)
• Matlab Examples

State Space Models

Equations of motion for any physical system may be conveniently formulated in terms of its state x(t):

[Block diagram: the input forces u(t) and the current state x(t) feed the model f_t; its output ẋ(t) is integrated to produce the state x(t).]

      ẋ(t) = f_t[x(t), u(t)]

where

      x(t) = state of the system at time t
      u(t) = vector of external inputs (typically driving forces)
      f_t  = general function mapping the current state x(t) and inputs u(t) to the state time-derivative ẋ(t)

• The function f_t may be time-varying, in general
• This potentially nonlinear time-varying model is extremely general (but causal)
• Even the human brain can be modeled in this form

State-Space History

1. Classic phase-space in physics (Gibbs 1901):
   System state = point in position-momentum space
2. Digital computer (1950s)
3. Finite State Machines (Mealy and Moore, 1960s)
4. Finite Automata
5. State-Space Models of Linear Systems
6. Reference:
   Linear System Theory: The State Space Approach
   L. A. Zadeh and C. A. Desoer
   Krieger, 1979

Key Property of State Vector

The key property of the state vector x(t) in the state-space formulation is that it completely determines the system at time t

• Future states depend only on the current state x(t) and on any inputs u(t) at time t and beyond
• All past states and the entire input history are "summarized" by the current state x(t)
• State x(t) includes all "memory" of the system
Force-Driven Mass Example

Consider f = ma for the force-driven mass:

• Since the mass m is constant, we can use momentum p(t) ≜ m v(t) in place of velocity (more fundamental, since momentum is conserved)
• x(t0) and p(t0) (or v(t0)) define the state of the mass m at time t0
• In the absence of external forces f(t), all future states are predictable from the state at time t0:

      p(t) = p(t0)   (conservation of momentum)
      x(t) = x(t0) + (1/m) ∫_{t0}^{t} p(τ) dτ,   t ≥ t0

• External forces f(t) drive the state to arbitrary points in state space:

      p(t) = p(t0) + ∫_{t0}^{t} f(τ) dτ,   t ≥ t0,   p(t) ≜ m v(t)
      x(t) = x(t0) + (1/m) ∫_{t0}^{t} p(τ) dτ,   t ≥ t0

Forming Outputs

Any system output is some function of the state, and possibly the input (directly):

      y(t) = o_t[x(t), u(t)]

[Block diagram: the model f_t is driven by input forces u(t) and the state x(t); the output map o_t forms y(t) from the state x(t) and the input u(t).]

Usually the output is a linear combination of state variables and possibly the current input:

      y(t) = C x(t) + D u(t)

where C and D are constant matrices of linear-combination coefficients

Numerical Integration

Recall the general state-space model in continuous time:

      ẋ(t) = f_t[x(t), u(t)]

An approximate discrete-time numerical solution is

      x(tn + Tn) = x(tn) + Tn f_{tn}[x(tn), u(tn)]

for n = 0, 1, 2, . . . (Forward Euler)

Let g_{tn}[x(tn), u(tn)] ≜ x(tn) + Tn f_{tn}[x(tn), u(tn)]:

[Block diagram: u(tn) and x(tn) feed g_t, producing x(tn + Tn); a delay z^{−Tn} feeds x(tn + Tn) back as the next x(tn).]

• This is a simple example of numerical integration for solving the ODE
• The ODE can be nonlinear and/or time-varying
• The sampling interval Tn may be fixed or adaptive

State Definition

We need a state variable for the amplitude of each physical degree of freedom

Examples:

• Ideal Mass:
  Energy = (1/2) m v²  ⇒  state variable = v(t)
  Note that in 3D we get three state variables (vx, vy, vz)
• Ideal Spring:
  Energy = (1/2) k x²  ⇒  state variable = x(t)
• Inductor: Analogous to mass, so current
• Capacitor: Analogous to spring, so charge (or voltage = charge/capacitance)
• Resistors and dashpots need no state variables assigned—they are stateless (no "memory")
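As a concrete illustration of the forward-Euler update above, here is a minimal Matlab/Octave sketch for the force-driven mass; the mass value, step size, and force signal are arbitrary choices for the example:

% Forward-Euler simulation of xdot = f_t[x,u] for the force-driven mass (sketch)
% state x = [position; velocity], input u = external force f(t)
m = 2;                        % mass (arbitrary example value)
T = 0.01;                     % fixed step size Tn = T
N = 1000;                     % number of steps
f = @(x,u) [x(2); u/m];       % xdot = [v; f/m]
x = [0; 0];                   % initial state
u = ones(1,N);                % constant unit driving force
X = zeros(2,N);
for n = 1:N
  X(:,n) = x;
  x = x + T * f(x, u(n));     % x(tn + Tn) = x(tn) + Tn * f[x(tn), u(tn)]
end
% X(1,:) ~ position over time, X(2,:) ~ velocity over time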
State-Space Model of a Force-Driven Mass

For the simple example of a mass m driven by external force f along the x axis:

[Figure: a mass m on the x axis (origin at x = 0) driven by the external force f(t).]

• There is only one energy-storage element (the mass), and it stores energy in the form of kinetic energy
• Therefore, we should choose the state variable to be velocity v = ẋ (or momentum p = mv = mẋ)
• Newton's f = ma readily gives the state-space formulation:

      v̇ = (1/m) f    or    ṗ = f

• This is a first-order system (no vector needed)

Force-Driven Mass Reconsidered

Why not include position x(t) as well as velocity v(t) in the state-space model for the force-driven mass?

      [ẋ(t); v̇(t)] = [0, 1; 0, 0] [x(t); v(t)] + [0; 1/m] f(t)

We might expect this because we know from before that the complete physical state of a mass consists of its velocity v and position x!

Force-Driven Mass Reconsidered and Dismissed

• Position x does not affect stored energy:

      Em = (1/2) m v²

• Velocity v(t) is the only energy-storing degree of freedom
• Only velocity v(t) is needed as a state variable
• The initial position x(0) can be kept "on the side" to enable computation of the complete state in position-momentum space:

      x(t) = x(0) + ∫_0^t v(τ) dτ

• In other words, the position can be derived from the velocity history without knowing the force history
• Note that the external force f(t) can only drive v̇(t). It cannot drive ẋ(t) directly:

      [ẋ(t); v̇(t)] = [0, 1; 0, 0] [x(t); v(t)] + [0; 1/m] f(t)

State Variable Summary

• State variable = physical amplitude for some energy-storing degree of freedom
• Mechanical Systems:
  State variable for each
  – ideal spring (linear or rotational)
  – point mass (or moment of inertia)
  times the number of dimensions in which it can move
• RLC Electric Circuits:
  State variable for each capacitor and inductor
• In Discrete-Time:
  State variable for each unit-sample delay
• Continuous- or Discrete-Time:
  Dimensionality of state space = order of the system (LTI systems)
Discrete-Time Linear State Space Models

For linear, time-invariant systems, a discrete-time state-space model looks like a vector first-order finite-difference model:

      x(n + 1) = A x(n) + B u(n)
      y(n)     = C x(n) + D u(n)

where

• x(n) ∈ R^N = state vector at time n
• u(n) = p × 1 vector of inputs
• y(n) = q × 1 output vector
• A = N × N state transition matrix
• B = N × p input coefficient matrix
• C = q × N output coefficient matrix
• D = q × p direct path coefficient matrix

The state-space representation is especially powerful for

• multi-input, multi-output (MIMO) linear systems
• time-varying linear systems (every matrix can have a time subscript n)

Zero-State Impulse Response (Markov Parameters)

Linear state-space model:

      x(n + 1) = A x(n) + B u(n)
      y(n)     = C x(n) + D u(n)

The zero-state impulse response of a state-space model is easily found by direct calculation: Let x(0) ≜ 0 and u = Ip δ(n) ≜ diag[δ(n), . . . , δ(n)]. Then

      h(0) = C x(0) + D Ip δ(0) = D
      x(1) = A x(0) + B Ip δ(0) = B
      h(1) = C B
      x(2) = A x(1) + B Ip δ(1) = A B
      h(2) = C A B
      x(3) = A x(2) + B Ip δ(2) = A² B
      h(3) = C A² B
      ...
      h(n) = C A^{n−1} B,   n > 0
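As a minimal sketch of how this recursion runs in practice, here is a direct Matlab/Octave simulation of x(n+1) = A x(n) + B u(n), y(n) = C x(n) + D u(n); the matrices and input below are arbitrary illustrative choices, not taken from the notes:

% Direct simulation of a discrete-time state-space model (sketch)
A = [0.5, -0.3; 1, 0];        % N = 2 states (arbitrary example values)
B = [1; 0];                   % p = 1 input
C = [1, 2];                   % q = 1 output
D = 0;
Nsteps = 50;
u = randn(1, Nsteps);         % arbitrary input signal
x = zeros(2, 1);              % zero initial state
y = zeros(1, Nsteps);
for n = 1:Nsteps
  y(n) = C*x + D*u(n);        % y(n) = C x(n) + D u(n)
  x    = A*x + B*u(n);        % x(n+1) = A x(n) + B u(n)
end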

Zero-State Impulse Response (Markov Parameters)

Thus, the "impulse response" of the state-space model can be summarized as

      h(n) = D,             n = 0
      h(n) = C A^{n−1} B,   n > 0

• Initial state x(0) is assumed to be 0
• Input "impulse" is u = Ip δ(n) = diag[δ(n), . . . , δ(n)]
• Each "impulse-response sample" h(n) is a q × p matrix, in general
• The impulse-response terms C Aⁿ B for n ≥ 0 are called Markov parameters

Linear State-Space Model Transfer Function

• Recall the linear state-space model:

      x(n + 1) = A x(n) + B u(n)
      y(n)     = C x(n) + D u(n)

  and its "impulse response"

      h(n) = D,             n = 0
      h(n) = C A^{n−1} B,   n > 0

• The transfer function is the z transform of the impulse response:

      H(z) = Σ_{n=0}^{∞} h(n) z^{−n}
           = D + Σ_{n=1}^{∞} C A^{n−1} B z^{−n}
           = D + z^{−1} C [ Σ_{n=0}^{∞} (z^{−1} A)ⁿ ] B

  The closed-form sum of a matrix geometric series gives

      H(z) = D + C (zI − A)^{−1} B

  (a q × p matrix of rational polynomials in z)
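As a quick numerical sanity check (a sketch using the second-order example filter treated later in these notes), the Markov parameters D, CB, CAB, ... can be compared against the impulse response obtained by filtering a unit impulse:

% Markov parameters vs. impulse response (sketch)
num = [1 2 3];  den = [1 1/2 1/3];     % example filter from these notes
[A,B,C,D] = tf2ss(num, den);
L = 10;                                % number of impulse-response samples
h_markov = zeros(1, L);
h_markov(1) = D;                       % h(0) = D
for n = 1:L-1
  h_markov(n+1) = C * A^(n-1) * B;     % h(n) = C A^(n-1) B, n > 0
end
h_filter = filter(num, den, [1, zeros(1, L-1)]);   % direct-form impulse response
max(abs(h_markov - h_filter))          % agreement to roundoff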
• If there are p inputs and q outputs, then H(z) is a q × p transfer-function matrix (or "matrix transfer function")
• Given transfer-function coefficients, many digital filter realizations are possible (different computing structures)

Example (p = 3, q = 2):

      H(z) = [ (1 − z^{−1})/(1 − 0.5 z^{−1}),    z^{−1},    1 + z^{−1} ;
               (2 + 3 z^{−1})/(1 − 0.1 z^{−1}),    (1 + z^{−1})/(1 − z^{−1}),    (1 − z^{−1})²/((1 − 0.1 z^{−1})(1 − 0.2 z^{−1})) ]

System Poles

Above, we found the transfer function to be

      H(z) = D + C (zI − A)^{−1} B

The poles of H(z) are the same as those of

      Hp(z) = (zI − A)^{−1}

By Cramer's rule for matrix inversion, the denominator polynomial of (zI − A)^{−1} is given by the determinant

      d(z) = |zI − A|

where |Q| denotes the determinant of the square matrix Q (also written det(Q)).

• In linear algebra, the polynomial d(z) = |zI − A| is called the characteristic polynomial of the matrix A
• The roots of the characteristic polynomial are called the eigenvalues of A
• Thus, the eigenvalues of the state transition matrix A are the system poles
• Each mode of vibration gives rise to a pole pair
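To illustrate the last point, here is a small Matlab/Octave check (a sketch using the same example filter as above) that the eigenvalues of A coincide with the roots of the transfer-function denominator:

% System poles two ways (sketch): eigenvalues of A vs. roots of den(z)
num = [1 2 3];  den = [1 1/2 1/3];
[A,B,C,D] = tf2ss(num, den);
poles_from_A   = sort(eig(A));       % eigenvalues of the state transition matrix
poles_from_den = sort(roots(den));   % roots of the characteristic polynomial
[poles_from_A, poles_from_den]       % the two columns agree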

Initial-Condition Response

Going back to the time domain, we have the linear discrete-time state-space model

      x(n + 1) = A x(n) + B u(n)
      y(n)     = C x(n) + D u(n)

and its "impulse response"

      h(n) = D,             n = 0
      h(n) = C A^{n−1} B,   n > 0

Given zero inputs and initial state x(0) ≠ 0, we get

      y_x(n) = C Aⁿ x(0),   n = 0, 1, 2, . . .

By superposition (for LTI systems), the complete response of a linear system is given by the sum of its forced response (such as the impulse response) and its initial-condition response

Difference Equation to State Space Form

A digital filter is often specified by its difference equation (Direct Form I). Second-order example:

      y(n) = u(n) + 2 u(n−1) + 3 u(n−2) − (1/2) y(n−1) − (1/3) y(n−2)

Every nth-order difference equation can be reformulated as a first-order vector difference equation called the "state space" (or "state variable") representation:

      x(n + 1) = A x(n) + B u(n)
      y(n)     = C x(n) + D u(n)

For the above example, we have, as we'll show,

      A ≜ [−1/2, −1/3; 1, 0]   (state transition matrix)
      B ≜ [1; 0]               (matrix routing input to state variables)
      C ≜ [3/2, 8/3]           (output linear-combination matrix)
      D ≜ 1                    (direct feedforward coefficient)
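One way to confirm these matrices is to run the state-space recursion and the original difference equation (via filter) on the same input and compare outputs; a minimal Matlab/Octave sketch, with an arbitrary random test input:

% Check (sketch): the state-space matrices above reproduce the difference equation
A = [-1/2, -1/3; 1, 0];  B = [1; 0];  C = [3/2, 8/3];  D = 1;
num = [1 2 3];  den = [1 1/2 1/3];
u = randn(1, 100);                   % arbitrary test input
x = [0; 0];  y_ss = zeros(1, 100);
for n = 1:100
  y_ss(n) = C*x + D*u(n);
  x       = A*x + B*u(n);
end
y_df = filter(num, den, u);          % Direct-Form implementation
max(abs(y_ss - y_df))                % ~0 (roundoff)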
Converting to State-Space Form by Hand

1. First, determine the filter transfer function H(z). In the example, the transfer function can be written, by inspection, as

      H(z) = (1 + 2 z^{−1} + 3 z^{−2}) / (1 + (1/2) z^{−1} + (1/3) z^{−2})

2. If h(0) ≠ 0, we must "pull out" the parallel delay-free path:

      H(z) = d0 + (b1 z^{−1} + b2 z^{−2}) / (1 + (1/2) z^{−1} + (1/3) z^{−2})

   Obtaining a common denominator and equating numerator coefficients yields

      d0 = 1
      b1 = 2 − 1/2 = 3/2
      b2 = 3 − 1/3 = 8/3

   The same result is obtained using long or synthetic division (see the sketch after this list).

3. Next, draw the strictly causal part in direct form II, as shown below:

   [Figure: direct-form II realization of the strictly causal part. The input x(n) enters a summing node whose output x1(n + 1) feeds two cascaded unit delays z^{−1}; the first delay output is x1(n) (feedback gain −1/2, feedforward gain 3/2) and the second is x2(n) (feedback gain −1/3, feedforward gain 8/3). A direct path of gain 1 adds the input to the output y(n).]

   It is important that the filter representation be canonical with respect to delay, i.e., the number of delay elements equals the order of the filter

4. Assign a state variable to the output of each delay element (see figure)

5. Write down the state-space representation by inspection. (Try it and compare to the answer above.)
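Step 2 can also be checked numerically; a minimal sketch using Matlab's deconv (polynomial long division) recovers d0 and the remainder numerator b1 z^{−1} + b2 z^{−2}:

% Pulling out the delay-free path by polynomial division (sketch)
num = [1 2 3];  den = [1 1/2 1/3];
[d0, r] = deconv(num, den);   % num = d0*den + r
d0                            % 1
r                             % [0  1.5000  2.6667]  =>  b1 = 3/2, b2 = 8/3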

Matlab Conversion from Direct-Form to State-Space Form

Matlab has extensive support for state-space models, such as

• tf2ss — transfer-function to state-space conversion
• ss2tf — state-space to transfer-function conversion

Note that these utilities are documented primarily for continuous-time systems, but they are also used for discrete-time systems.

Let's repeat the previous example using Matlab:

Previous Example Using Matlab

>> num = [1 2 3];     % transfer function numerator
>> den = [1 1/2 1/3]; % denominator coefficients
>> [A,B,C,D] = tf2ss(num,den)

A =
   -0.5000   -0.3333
    1.0000         0

B =
     1
     0

C = 1.5000    2.6667

D = 1

>> [N,D] = ss2tf(A,B,C,D)

N = 1.0000    2.0000    3.0000

D = 1.0000    0.5000    0.3333
Matlab Documentation

The tf2ss and ss2tf functions are documented at

http://www.mathworks.com/access/helpdesk/help/toolbox/signal/tf2ss.shtml

as well as within Matlab itself (e.g., help tf2ss).

Related Signal Processing Toolbox functions include

• tf2sos — Convert digital filter transfer function parameters to second-order sections form.
• sos2ss — Convert second-order filter sections to state-space form.
• tf2zp — Convert transfer function filter parameters to zero-pole-gain form.
• zp2ss — Convert zero-pole-gain filter parameters to state-space form.

Similarity Transformations

A similarity transformation of a state-space system is a linear change of state-variable coordinates:

      x(n) = E x̃(n)

where

• x(n) = original state vector
• x̃(n) = state vector in new coordinates
• E = any invertible (one-to-one) matrix (linear transformation)

Substituting x(n) = E x̃(n) gives

      E x̃(n + 1) = A E x̃(n) + B u(n)
      y(n)       = C E x̃(n) + D u(n)

Premultiplying the first equation above by E^{−1} gives

      x̃(n + 1) = (E^{−1} A E) x̃(n) + (E^{−1} B) u(n)
      y(n)     = (C E) x̃(n) + D u(n)

Define the transformed system matrices by

      Ã = E^{−1} A E
      B̃ = E^{−1} B
      C̃ = C E
      D̃ = D
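Before verifying this algebraically on the next slide, a quick numerical spot-check (a sketch; E below is just a random invertible change of coordinates) shows that a similarity transformation leaves the poles and the transfer function unchanged:

% Similarity transformation leaves poles and transfer function unchanged (sketch)
[A,B,C,D] = tf2ss([1 2 3], [1 1/2 1/3]);   % example system from these notes
E  = randn(2) + eye(2);                    % arbitrary (almost surely invertible) E
At = E\(A*E);  Bt = E\B;  Ct = C*E;  Dt = D;
[sort(eig(A)), sort(eig(At))]              % same eigenvalues (system poles)
z  = exp(1j*0.7);                          % arbitrary point on the unit circle
H  = D  + C *((z*eye(2) - A )\B );         % H(z)
Ht = Dt + Ct*((z*eye(2) - At)\Bt);         % H~(z)
abs(H - Ht)                                % ~0 (roundoff)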

We can now write

      x̃(n + 1) = Ã x̃(n) + B̃ u(n)
      y(n)     = C̃ x̃(n) + D u(n)

The transformed system describes the same system in new state-variable coordinates.

Let's verify that the transfer function has not changed:

      H̃(z) = D̃ + C̃ (zI − Ã)^{−1} B̃
            = D + (C E) (zI − E^{−1} A E)^{−1} (E^{−1} B)
            = D + C [E (zI − E^{−1} A E) E^{−1}]^{−1} B
            = D + C (zI − A)^{−1} B = H(z)

• Since the eigenvalues of A are the poles of the system, it follows that the eigenvalues of Ã = E^{−1} A E are the same. In other words, eigenvalues are unaffected by a similarity transformation.
• The transformed Markov parameters, C̃ Ãⁿ B̃, are also unchanged, since they are given by the inverse z transform of the transfer function H̃(z). However, it is also easy to show this by direct calculation.

State Space Modal Representation

Diagonal state transition matrix = modal representation:

      [x1(n + 1); x2(n + 1); … ; xN(n + 1)] = diag(λ1, λ2, …, λN) [x1(n); x2(n); … ; xN(n)] + [b1; b2; … ; bN] u(n)
      y(n) = C x(n) + D u(n)

(always possible when there are no repeated poles)

The N complex modes are decoupled:

      x1(n + 1) = λ1 x1(n) + b1 u(n)
      x2(n + 1) = λ2 x2(n) + b2 u(n)
      ...
      xN(n + 1) = λN xN(n) + bN u(n)
      y(n) = c1 x1(n) + c2 x2(n) + · · · + cN xN(n) + D u(n)

That is, the diagonal state-space system consists of N parallel one-pole systems:

      H(z) = C (zI − A)^{−1} B + D
           = D + Σ_{i=1}^{N} ci bi z^{−1} / (1 − λi z^{−1})
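Here is a minimal Matlab/Octave sketch of this decomposition for the example filter used earlier in these notes; it diagonalizes A with eig and checks that the sum of one-pole sections matches the direct formula at an arbitrary evaluation point:

% Modal (diagonalized) form as a parallel bank of one-pole sections (sketch)
[A,B,C,D] = tf2ss([1 2 3], [1 1/2 1/3]);    % example system from these notes
[E, Lambda] = eig(A);                       % columns of E = eigenvectors of A
lambda = diag(Lambda);                      % system poles (a complex pair here)
Bt = E\B;  Ct = C*E;                        % transformed input/output matrices
z  = exp(1j*0.4);                           % arbitrary evaluation point
Hdirect = D + C*((z*eye(2) - A)\B);         % H(z) = D + C (zI - A)^-1 B
Hmodal  = D + sum(Ct(:).*Bt(:)./(z - lambda));   % D + sum_i ci*bi/(z - lambda_i)
abs(Hdirect - Hmodal)                       % ~0 (roundoff)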
Finding the (Diagonalized) Modal Representation

The ith eigenvector e_i of a matrix A has the defining property

      A e_i = λi e_i,

where λi is the associated eigenvalue. Thus, the eigenvector e_i is invariant under the linear transformation A to within a (generally complex) scale factor λi.

An N × N matrix A typically has N eigenvectors.¹ Let's make a similarity-transformation matrix E out of the N eigenvectors:

      E = [e1 e2 · · · eN]

Then we have

      A E = [λ1 e1  λ2 e2  · · ·  λN eN] ≜ E Λ

where Λ ≜ diag(λ) is a diagonal matrix having λ = [λ1 λ2 · · · λN]^T along its diagonal.

Premultiplying by E^{−1} gives

      E^{−1} A E = Λ

Thus, E = [e1 e2 · · · eN] is a similarity transformation that diagonalizes the system.

¹ When there are repeated eigenvalues, there may be only one linearly independent eigenvector for the repeated group. We will not consider this case and refer the interested reader to a Web search on "generalized eigenvectors," e.g., http://en.wikipedia.org/wiki/Generalized_eigenvector.

State-Space Analysis Example: The Digital Waveguide Oscillator

Let's use state-space analysis to determine the frequency of oscillation of the following system:

[Figure: the second-order digital waveguide oscillator. The sum x1(n) + x2(n) is scaled by the coefficient c; the result is added to −x2(n) to form x1(n + 1) and to x1(n) to form x2(n + 1), each of which passes through a unit delay z^{−1} to produce x1(n) and x2(n).]

Note the assignments of unit-delay outputs to state variables x1(n) and x2(n).

We have

      x1(n + 1) = c [x1(n) + x2(n)] − x2(n) = c x1(n) + (c − 1) x2(n)

and

      x2(n + 1) = x1(n) + c [x1(n) + x2(n)] = (1 + c) x1(n) + c x2(n)

In matrix form, the state transition can be written as

      [x1(n + 1); x2(n + 1)] = [c, c − 1; c + 1, c] [x1(n); x2(n)] ≜ A x(n)
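As a quick empirical preview (a sketch; the sampling rate, target frequency, and initial state are arbitrary choices), simulating the recursion x(n + 1) = A x(n) and locating the spectral peak confirms that the structure oscillates at the frequency set by c:

% Simulate the digital waveguide oscillator and estimate its frequency (sketch)
fs = 1000;  f0 = 125;            % arbitrary sampling rate and target frequency (Hz)
c  = cos(2*pi*f0/fs);            % coefficient sets the oscillation frequency
A  = [c, c-1; c+1, c];
N  = 1024;  x = [1; 0];          % arbitrary nonzero initial state
y1 = zeros(1, N);
for n = 1:N
  y1(n) = x(1);                  % observe the first state variable
  x = A*x;                       % autonomous update (no input)
end
[~, k] = max(abs(fft(y1)));      % positive-frequency spectral peak (found first)
f_est = (k-1)*fs/N               % = 125, matching f0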

or, in vector notation,

      x(n + 1) = A x(n)

The poles of the system are given by the eigenvalues of A, which are the roots of its characteristic polynomial. That is, we solve

      |λi I − A| = 0

for λi, i = 1, 2, . . . , N, or, for our N = 2 problem,

      0 = det([λi − c, 1 − c; −c − 1, λi − c]) = (λi − c)² + (1 − c)(1 + c) = λi² − 2 λi c + 1

Using the quadratic formula, the two solutions are found to be

      λi = c ± √(c² − 1) = c ± j √(1 − c²)

Defining c = cos(θ), we obtain the simple formula

      λi = cos(θ) ± j sin(θ) = e^{±jθ}

It is now clear that the system is a real sinusoidal oscillator for −1 ≤ c ≤ 1, oscillating at normalized radian frequency ωc T ≜ θ ≜ arccos(c) ∈ [−π, π].

We determined the frequency of oscillation ωc T from the eigenvalues λi of A. To study this system further, we can diagonalize A. For that we need the eigenvectors as well as the eigenvalues.

Eigenstructure of A

The defining property of the eigenvectors e_i and eigenvalues λi of A is the relation

      A e_i = λi e_i,   i = 1, 2,

which expands to

      [c, c − 1; c + 1, c] [1; ηi] = λi [1; ηi].

• The first element of e_i is normalized arbitrarily to 1
• We have two equations in two unknowns λi and ηi:

      c + ηi (c − 1) = λi
      (1 + c) + c ηi = λi ηi

  (We already know λi from above, but this analysis will find them by a different method.)
• Substitute the first into the second to eliminate λi:

      1 + c + c ηi = [c + ηi (c − 1)] ηi = c ηi + ηi² (c − 1)
      ⇒ 1 + c = ηi² (c − 1)
      ⇒ ηi = ± √[(c + 1)/(c − 1)]
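A one-line Matlab/Octave confirmation (a sketch; θ is an arbitrary test value) that the eigenvalues of A land on the unit circle at angles ±θ:

% Eigenvalues of the oscillator's state transition matrix (sketch)
theta  = 2*pi*0.123;               % arbitrary normalized radian frequency
c      = cos(theta);
lambda = eig([c, c-1; c+1, c]);
[abs(lambda), angle(lambda)]       % magnitudes 1, angles ±theta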
• We have found both eigenvectors:

      e1 = [1; η],   e2 = [1; −η],   where η ≜ √[(c + 1)/(c − 1)]

  They are linearly independent provided η ≠ 0 ⇔ c ≠ −1, and finite provided c ≠ 1.
• The eigenvalues are then

      λi = c + ηi (c − 1) = c ± √[(c − 1)² (c + 1)/(c − 1)] = c ± √(c² − 1)

• Assuming |c| < 1, they can be written as

      λi = c ± j √(1 − c²)

• With c ∈ (−1, 1), define θ ≜ arccos(c), i.e., c = cos(θ) and √(1 − c²) = sin(θ).
• The eigenvalues become

      λ1 = c + j √(1 − c²) = cos(θ) + j sin(θ) = e^{jθ}
      λ2 = c − j √(1 − c²) = cos(θ) − j sin(θ) = e^{−jθ}

  as expected.

We again found the explicit formula for the frequency of oscillation:

      ωc = θ/T = fs arccos(c),

where fs denotes the sampling rate. Or,

      c = cos(ωc T)

The coefficient range c ∈ (−1, 1) corresponds to frequencies f ∈ (−fs/2, fs/2).

We have shown that the example system oscillates sinusoidally at any desired digital frequency ωc when c = cos(ωc T), where T denotes the sampling interval.

The Diagonalized Example System

We can now diagonalize our system using the similarity transformation

      E = [e1 e2] = [1, 1; η, −η]

where η = √[(c + 1)/(c − 1)].

We have only been working with the state-transition matrix A up to now. The system has no inputs, so it must be excited by initial conditions (although we could easily define one or two inputs that sum into the delay elements).
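A quick numerical check (a sketch; θ is an arbitrary test value) that this E indeed diagonalizes A; note that for |c| < 1 the quantity (c + 1)/(c − 1) is negative, so η is purely imaginary, and the order in which e^{jθ} and e^{−jθ} appear on the diagonal depends on which branch of the square root is taken for η:

% The eigenvector matrix E diagonalizes the oscillator's A (sketch)
theta = 2*pi*0.123;  c = cos(theta);
A   = [c, c-1; c+1, c];
eta = sqrt((c+1)/(c-1));          % purely imaginary for |c| < 1
E   = [1, 1; eta, -eta];
At  = E \ (A*E)                   % diagonal (off-diagonal entries ~ 0)
angle(diag(At))                   % ±theta (order depends on the sqrt branch)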

We have two natural choices of output which are the the modal representation:
state variables x1(n) and x2(n), corresponding to the 1 1
 
choices C = [1, 0] and C = [0, 1]: y1(n) = [1, 0]x(n) = [1, 0] x̃(n)
η −η

y1(n) = x1(n) = [1, 0] x(n) = [1, 1]x̃(n) = λn1 x̃1(0) + λn2 x̃2(0)

y2(n) = x2(n) = [0, 1] x(n)  
1 1
y2(n) = [0, 1]x(n) = [0, 1] x̃(n)
η −η
Thus, a convenient choice of the system C matrix is the
2 × 2 identity matrix. = [η, −η]x̃(n) = ηλn1 x̃1(0) − ηλn2 x̃2(0)

For the diagonalized system we obtain The output signal from the first state variable x1(n) is
 jθ 
e 0 y1(n) = λn1 x̃1(0) + λn2 x̃2(0)
à = E−1AE =
0 e−jθ = ejωcnT x̃1(0) + e−jωcnT x̃2(0)
B̃ = E−1B = 0 
1 1
 The initial condition x(0) = [1, 0]T corresponds to modal
C̃ = CE = E = initial state
η −η       
−1 1 −1 −e −1 1 1/2
D̃ = 0 x̃(0) = E = =
q 0 2e −e 1 0 1/2
c+1
where θ = arccos(c) and η = c−1 as derived above.
For this initialization, the output y1 from the first state
We may now view our state-output signals in terms of variable x1 is simply
ejωcnT + e−jωcnT
y1(n) = = cos(ωcnT )
2
Similarly y2(n) is proportional to sin(ωcnT )
(“phase quadrature” output), with amplitude η.
35 36
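To close the loop, here is a minimal Matlab/Octave sketch (arbitrary sampling rate and frequency) that simulates the oscillator from x(0) = [1; 0] and confirms the cosine/sine phase-quadrature outputs; in this real-valued form the quadrature amplitude works out to |η| = √[(1 + c)/(1 − c)]:

% Phase-quadrature outputs of the waveguide oscillator for x(0) = [1; 0] (sketch)
fs = 1000;  f0 = 125;  T = 1/fs;  wc = 2*pi*f0;   % arbitrary example values
c = cos(wc*T);  A = [c, c-1; c+1, c];
N = 200;  x = [1; 0];  y = zeros(2, N);
for n = 1:N
  y(:,n) = x;                        % y1(n) = x1(n), y2(n) = x2(n)
  x = A*x;
end
n = 0:N-1;
max(abs(y(1,:) - cos(wc*n*T)))                  % ~0 : y1(n) = cos(wc n T)
eta_mag = sqrt((1+c)/(1-c));                    % |eta|
max(abs(y(2,:) - eta_mag*sin(wc*n*T)))          % ~0 : y2(n) proportional to sin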
