
STATE SPACE ANALYSIS

1. Introduction
The analysis of control system performance in the earlier chapters is based on the transfer function and on graphical approaches such as the Bode, Nyquist and root locus plots. The mathematical model of the control system is therefore obtained in the frequency domain.

State space analysis is an excellent method for the design and analysis of
control systems. The conventional and old method for the design and
analysis of control systems is the transfer function method.

The transfer function approach rests on the following assumptions:

(i) The initial conditions in system equation are considered as zero.


(ii) Applicable only to single input single output systems.
(iii) Applicable only to linear time invariant system equations.
(iv) The analysis of system output for a specified input is obtained. No
information regarding internal state of the system is available.

Advantages of state variable analysis


(i) It is applicable to dynamic systems (Linear/Non-linear/Time
Variant/Time Invariant)
(ii) The initial conditions pertaining to the system are taken into
consideration.
(iii) The analysis is carried out in time domain.
(iv) The mathematical model covers both single input single output and
multi input multi output systems
(v) The system can be optimised; the formulation is directly useful for optimal design.
(vi) The state space model gives complete description of the system.

Disadvantages of state variable analysis


(i) The techniques are complex.
(ii) Many computations are required.

Table 1 summarizes the comparison of the transfer function model and state
variable model.
Table 1. Transfer function model versus state variable model

Transfer function model:
(i) Applicable only to linear time-invariant (LTI) systems.
(ii) Initial conditions are assumed to be zero.
(iii) Not suitable for multi-input multi-output systems.
(iv) The transfer function approach is a graphical approach.
(v) Error is more.
(vi) Provides no information regarding the internal states of the system.
(vii) Both time domain and frequency domain approaches are used.
(viii) Not convenient for digital computer simulation.
(ix) The transfer function model of a system is unique.

State variable model:
(i) Applicable to linear, non-linear, time-variant, time-invariant, discrete-data and continuous-data control systems.
(ii) Initial conditions are considered.
(iii) Suitable for multi-input multi-output systems.
(iv) The state variable approach is a purely analytical approach.
(v) Error is less.
(vi) The internal variables, known as state variables, can be obtained using this approach.
(vii) It is purely a time domain approach.
(viii) Convenient for digital computer simulation.
(ix) The state variable model of a system is not unique.

2. Basic Concepts
State Variables: The smallest set of variables that determine the state of a dynamic system are called state variables. It is not necessary that the state variables be physically measurable or observable quantities. For example, the voltage across a capacitor at t = 0 is a state variable; likewise, the initial current through an inductor is also treated as a state variable.

State: The state of a dynamic system is the smallest set of variables such
that the knowledge of these variables at time t=t0 (Initial condition), together
with the knowledge of input for t≥ t0 , completely determines the behaviour
of the system for any time 𝑡≥ t0 . In other words, a compact and concise
representation of the past history of the system is known as the state of the
system.

State vector: If n state variables are needed to completely describe the behaviour of a given system, then these n state variables can be considered the n components of a vector X. Such a vector is called a state vector.

State space: The n-dimensional space whose coordinate axes consist of the x1 axis, x2 axis, …, xn axis, where x1, x2, …, xn are the state variables, is called the state space.

3. State Model
The state model of a dynamic system involves three types of variables: input variables, output variables and state variables.

Fig 1. State space representation of a system: a MIMO control system with input variables u1, u2, …, um, output variables y1, y2, …, yp and state variables x1, x2, …, xn

Let us consider a multi-input multi-output system having

m input variables u1(t), u2(t), ……, um(t)

p output variables y1(t), y2(t), ……, yp(t)

n state variables x1(t), x2(t), ……, xn(t)

The state model consists of two equations such as state equation and output
equation.

\dot{X}(t) = AX(t) + BU(t) ……………. State Equation

Y (t) = CX(t) + DU(t) …………… Output Equation

A matrix is state matrix of size (n×n)

B matrix is input matrix of size (n×m)

C matrix is output matrix of size (p×n)


D matrix is the direct transmission matrix of size (p×m)

X(t) is the state vector of size (n×1)

Y(t) is the output vector of size (p×1)

U(t) is the input vector of size (m×1)
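
As a quick illustration of the notation above, the following minimal Python/NumPy sketch stores a hypothetical state model with n = 2 states, m = 1 input and p = 1 output and evaluates the state and output equations at one time instant. The numerical values of A, B, C and D are assumed for illustration only; they are not taken from the text.

    import numpy as np

    # Hypothetical second-order state model (values assumed for illustration)
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])          # state matrix, n x n
    B = np.array([[0.0],
                  [1.0]])                 # input matrix, n x m
    C = np.array([[1.0, 0.0]])            # output matrix, p x n
    D = np.array([[0.0]])                 # direct transmission matrix, p x m

    x = np.array([[1.0], [0.0]])          # state vector X(t), n x 1
    u = np.array([[2.0]])                 # input vector U(t), m x 1

    x_dot = A @ x + B @ u                 # state equation  X'(t) = A X(t) + B U(t)
    y     = C @ x + D @ u                 # output equation Y(t)  = C X(t) + D U(t)
    print(x_dot.ravel(), y.ravel())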

The block diagram representation of state model is shown in Fig 2.

Fig 2. Block diagram representation of the state model: u(t) enters through gain B into a summing junction, the sum is integrated to give x(t), x(t) is fed back through A to the summing junction and fed forward through C, and the output y(t) is the sum of Cx(t) and the direct transmission term Du(t)

3.1 State Equation


The state equations are a set of first order differential equations
where each first derivative of the state variable is expressed as a
linear combination of state variables and input variables, i.e.

\frac{dx_1}{dt} = a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + b_{11}u_1 + b_{12}u_2 + \cdots + b_{1m}u_m

\frac{dx_2}{dt} = a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n + b_{21}u_1 + b_{22}u_2 + \cdots + b_{2m}u_m

\vdots

\frac{dx_n}{dt} = a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n + b_{n1}u_1 + b_{n2}u_2 + \cdots + b_{nm}u_m ………………(1)

In matrix form, equation (1) can be written as

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_n \end{bmatrix} =
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} +
\begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1m} \\ b_{21} & b_{22} & \cdots & b_{2m} \\ \vdots & & & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nm} \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{bmatrix} ………………..(2)

Thus, in compact form

\dot{X}(t) = AX(t) + BU(t) ..............................................(3)

3.2 Output Equation


The output equations are a set of equations where the output variables are expressed as a linear combination of state variables and input variables, i.e.

\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix} =
\begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & & & \vdots \\ c_{p1} & c_{p2} & \cdots & c_{pn} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} +
\begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1m} \\ d_{21} & d_{22} & \cdots & d_{2m} \\ \vdots & & & \vdots \\ d_{p1} & d_{p2} & \cdots & d_{pm} \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{bmatrix} ………………..(4)

Now, in compact form

Y(t) = CX(t) + DU(t) ………………………………..(5)

Equation (3) represents the dynamic equation or state equation. Equation (5)
is called the output equation. Equations (3) and (5) represent state model.

Example 1:
Write the state equation for the network shown in Fig 3.

Fig 3. Circuit of Example 1: a voltage source vi feeds node vc1 through a 4 Ω resistor; a 2 H inductor carrying current iL connects node vc1 to node vc2; a 0.25 F capacitor is connected at vc1 and a 0.5 F capacitor at vc2; a 1 Ω resistor and a current source is are connected at vc2

Define the state variables as current through the inductor and voltage
across the capacitors. Write two node equations containing capacitors and a
loop equation containing the inductor. The state variables are vc1, vc2 , and iL .

Node equations are

0.25\,\frac{dv_{c1}}{dt} + i_L + \frac{v_{c1} - v_i}{4} = 0 \;\Rightarrow\; \dot{v}_{c1} = -v_{c1} - 4 i_L + v_i

0.5\,\frac{dv_{c2}}{dt} - i_L + \frac{v_{c2}}{1} - i_s = 0 \;\Rightarrow\; \dot{v}_{c2} = 2 i_L - 2 v_{c2} + 2 i_s

and the loop equation is

2\,\frac{di_L}{dt} + v_{c2} - v_{c1} = 0 \;\Rightarrow\; \dot{i}_L = 0.5 v_{c1} - 0.5 v_{c2}

or

\begin{bmatrix} \dot{v}_{c1} \\ \dot{v}_{c2} \\ \dot{i}_L \end{bmatrix} =
\begin{bmatrix} -1 & 0 & -4 \\ 0 & -2 & 2 \\ 0.5 & -0.5 & 0 \end{bmatrix}
\begin{bmatrix} v_{c1} \\ v_{c2} \\ i_L \end{bmatrix} +
\begin{bmatrix} 1 & 0 \\ 0 & 2 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} v_i \\ i_s \end{bmatrix}
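
A small NumPy check of the matrices just derived (an addition, not part of the original example): it forms A and B for this circuit and prints the eigenvalues of A, which for this passive network should all have negative real parts.

    import numpy as np

    # State and input matrices from Example 1
    A = np.array([[-1.0, 0.0, -4.0],
                  [ 0.0, -2.0, 2.0],
                  [ 0.5, -0.5, 0.0]])
    B = np.array([[1.0, 0.0],
                  [0.0, 2.0],
                  [0.0, 0.0]])

    # Eigenvalues of the state matrix (natural modes of the circuit)
    print(np.linalg.eigvals(A))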

4. State Space Representation


4.1 Phase variables method or Direct decomposition method

Consider following nth order LTI system relating the output y(t) to the input
u(t).

y^{(n)} + a_1 y^{(n-1)} + a_2 y^{(n-2)} + \cdots + a_{n-1}\dot{y} + a_n y = b\,u ………………. (6)

The phase variables are defined as those particular state variables which are
obtained from one of the system variables & its (n-1) derivatives. Often the
variables used are the system output & the remaining state variables are
then derivatives of the output.

Let us define the state variables as

x_1 = y

x_2 = \frac{dy}{dt} = \frac{dx_1}{dt}

x_3 = \frac{d^2 y}{dt^2} = \frac{dx_2}{dt}

\vdots

x_n = y^{(n-1)} = \frac{dx_{n-1}}{dt} ………………………(7)

Equation (7) can be written as


\dot{x}_1 = x_2
\dot{x}_2 = x_3
\vdots
\dot{x}_{n-1} = x_n
\dot{x}_n = -a_n x_1 - a_{n-1} x_2 - \cdots - a_1 x_n + b\,u ………………….. (8)

Writing Equation (8) in vector matrix form

\dot{X}(t) = AX(t) + BU(t)

Where

X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}_{n\times 1}, \quad
A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}_{n\times n}, \quad
B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ b \end{bmatrix}_{n\times 1}

Output equation can be written as

Y(t) = CX(t)

Where C = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}_{1\times n}

The matrix A is called matrix in phase variable form. Such form of matrix A
is also called Bush form or Companion form. Matrix A has following
properties

(i) The upper off-diagonal elements (the diagonal immediately above the principal diagonal) are all 1.
(ii) All other elements are zeros except last row.
(iii) Last row elements are the negative of the coefficients in the original
differential equation.
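
The sketch below builds the phase-variable (Bush/companion) form of A, B and C directly from the coefficients a1, …, an and b of equation (6). The helper function and the particular coefficient values are assumed only to exercise the construction (the numbers happen to match Example 2 further below).

    import numpy as np

    def phase_variable_form(a, b):
        """a = [a1, ..., an] from eq.(6); b = input gain. Returns (A, B, C)."""
        n = len(a)
        A = np.zeros((n, n))
        A[:-1, 1:] = np.eye(n - 1)        # ones on the upper off-diagonal
        A[-1, :] = -np.array(a[::-1])     # last row: -an, -a(n-1), ..., -a1
        B = np.zeros((n, 1)); B[-1, 0] = b
        C = np.zeros((1, n)); C[0, 0] = 1.0
        return A, B, C

    # For y''' + 2y'' + 3y' + 4y = 5u:  a1=2, a2=3, a3=4, b=5
    A, B, C = phase_variable_form([2, 3, 4], 5)
    print(A); print(B.ravel()); print(C)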

4.2 Canonical form or Parallel decomposition technique

Consider the dynamics of a system which can be written by the following nth
order differential equation

y^{(n)} + a_1 y^{(n-1)} + a_2 y^{(n-2)} + \cdots + a_{n-1}\dot{y} + a_n y = b_0 u^{(n)} + b_1 u^{(n-1)} + b_2 u^{(n-2)} + \cdots + b_{n-1}\dot{u} + b_n u …………(9)

Where u is the input and y is the output. This equation can also be written as

\frac{Y(s)}{U(s)} = \frac{b_0 s^n + b_1 s^{n-1} + \cdots + b_{n-1}s + b_n}{s^n + a_1 s^{n-1} + \cdots + a_{n-1}s + a_n} ……………………………………..(10)

In the next subsection, the state space representations of the system


defined by equation (9) or equation (10) are presented in controllable
canonical form, observable canonical form, diagonal canonical form and
Jordan canonical form.

4.2.1 Controllable Canonical Form (CCF)

The state space representation of the system in controllable canonical form is given by

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_{n-1} \\ \dot{x}_n \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u ………………….(11)

y = \begin{bmatrix} (b_n - a_n b_0) & (b_{n-1} - a_{n-1} b_0) & \cdots & (b_1 - a_1 b_0) \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + b_0 u …………………(12)

This form of representation is also called first companion form. The CCF is
important in discussing the pole-placement approach to the control system
design.
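
Equations (11) and (12) translate directly into code. The sketch below forms the controllable canonical form matrices from the coefficients a1, …, an and b0, …, bn of equation (10); the helper and the sample coefficients are assumptions for illustration (the numbers correspond to Example 5 later in the chapter).

    import numpy as np

    def controllable_canonical(a, b):
        """a = [a1,...,an], b = [b0, b1,...,bn] from eq.(10). Returns (A, B, C, D)."""
        n = len(a)
        A = np.zeros((n, n))
        A[:-1, 1:] = np.eye(n - 1)
        A[-1, :] = -np.array(a[::-1])               # last row: -an ... -a1
        B = np.zeros((n, 1)); B[-1, 0] = 1.0
        b0 = b[0]
        # C row per eq.(12): [bn - an*b0, ..., b1 - a1*b0]
        C = np.array([[b[n - i] - a[n - 1 - i] * b0 for i in range(n)]])
        D = np.array([[b0]])
        return A, B, C, D

    A, B, C, D = controllable_canonical([5, 6, 7], [0, 2, 3, 1])
    print(A); print(C)   # expect last row [-7, -6, -5] and C = [[1, 3, 2]]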

4.2.2 Observable Canonical Form (OCF)

The state space representation of the system in observable canonical form is given by

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \vdots \\ \dot{x}_n \end{bmatrix} =
\begin{bmatrix} 0 & 0 & \cdots & 0 & -a_n \\ 1 & 0 & \cdots & 0 & -a_{n-1} \\ 0 & 1 & \cdots & 0 & -a_{n-2} \\ \vdots & & & & \vdots \\ 0 & 0 & \cdots & 1 & -a_1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} +
\begin{bmatrix} b_n - a_n b_0 \\ b_{n-1} - a_{n-1} b_0 \\ b_{n-2} - a_{n-2} b_0 \\ \vdots \\ b_1 - a_1 b_0 \end{bmatrix} u …………………..(13)

y = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} + b_0 u ………………………..(14)

This form of representation is also called the second companion form. The (n × n) state matrix of the state equation (13) is the transpose of that of the state equation (11).

4.2.3 Diagonal Canonical Form

Consider the transfer function of the system represented by equation (10).


Here, the denominator polynomial involves only distinct roots. Now, equation (10) can be written as

\frac{Y(s)}{U(s)} = \frac{b_0 s^n + b_1 s^{n-1} + \cdots + b_{n-1}s + b_n}{(s + p_1)(s + p_2)\cdots(s + p_n)}
= b_0 + \frac{K_1}{s + p_1} + \frac{K_2}{s + p_2} + \cdots + \frac{K_n}{s + p_n} …………………….(15)

The state space representation of the system in diagonal canonical form is given by

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_n \end{bmatrix} =
\begin{bmatrix} -p_1 & 0 & \cdots & 0 \\ 0 & -p_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & -p_n \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} +
\begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} u …………………………….(16)

y = \begin{bmatrix} K_1 & K_2 & \cdots & K_n \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + b_0 u ……………………………………(17)

Note: Matrix A is a diagonal matrix and diagonal elements are poles. C matrix
contains the residue of the poles.
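
The poles pᵢ and residues Kᵢ of equation (15) can be computed numerically with scipy.signal.residue, which performs exactly this partial-fraction expansion. The sketch below builds the diagonal canonical matrices for an assumed transfer function (the one used in Example 6 later); the numbers are for illustration only.

    import numpy as np
    from scipy.signal import residue

    # Assumed transfer function: (s^3 + 7s^2 + 12s + 8) / (s^3 + 6s^2 + 11s + 6)
    num = [1, 7, 12, 8]
    den = [1, 6, 11, 6]

    K, poles, k = residue(num, den)     # residues, poles, direct polynomial term
    b0 = k[0] if len(k) else 0.0        # constant term of the expansion

    A = np.diag(poles)                  # diagonal state matrix (entries are the poles -p_i)
    B = np.ones((len(poles), 1))
    C = K.reshape(1, -1)                # residues appear in the output matrix
    D = np.array([[b0]])
    print(np.real(A).round(3)); print(np.real(C).round(3)); print(D)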

4.2.4 Jordan Canonical Form

In this case the denominator polynomial of the transfer function of equation


(10) involves multiple roots. Let us consider an example where pi’s are
different from one another except that the first three pi’s are equal, or
p1=p2=p3 . Then the equation (10) can be written as

\frac{Y(s)}{U(s)} = \frac{b_0 s^n + b_1 s^{n-1} + \cdots + b_{n-1}s + b_n}{(s + p_1)^3 (s + p_4)(s + p_5)\cdots(s + p_n)} …………………………….(18)

Using partial fraction expansion of equation (18), we have

\frac{Y(s)}{U(s)} = b_0 + \frac{K_1}{(s + p_1)^3} + \frac{K_2}{(s + p_1)^2} + \frac{K_3}{s + p_1} + \frac{K_4}{s + p_4} + \cdots + \frac{K_n}{s + p_n} ………….(19)

The state space representation of the system in Jordan canonical form is given by

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \vdots \\ \dot{x}_n \end{bmatrix} =
\begin{bmatrix} -p_1 & 1 & 0 & 0 & \cdots & 0 \\ 0 & -p_1 & 1 & 0 & \cdots & 0 \\ 0 & 0 & -p_1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & -p_4 & \cdots & 0 \\ \vdots & & & & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -p_n \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 1 \end{bmatrix} u ……………………….(20)

y = \begin{bmatrix} K_1 & K_2 & \cdots & K_n \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + b_0 u ………………………………(20a)

Example 2:
Obtain the state equation in phase variable form for the following differential
equation.

2\frac{d^3 y}{dt^3} + 4\frac{d^2 y}{dt^2} + 6\frac{dy}{dt} + 8y = 10\,u(t)

Solution:

The differential equation is third order, thus there are three state variables
as follows

x_1 = y, \quad x_2 = \dot{y}, \quad x_3 = \ddot{y}

and the derivatives are


\dot{x}_1 = x_2, \quad \dot{x}_2 = x_3, \quad \text{and} \quad \dot{x}_3 = -4x_1 - 3x_2 - 2x_3 + 5u(t)

Or in matrix form

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -4 & -3 & -2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} 5u(t)

y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}

Simulation Diagram

Equation (8) indicates that state variables are determined by integrating the
corresponding state equation. A diagram known as the simulation diagram
can be constructed to model the given differential equations. The basic
element of the simulation diagram is the integrator. The first equation in (8)
is

\dot{x}_1 = x_2

Integrating, we have

x_1 = \int x_2 \, dt

The above integral is represented by the time-domain diagram shown in Fig


4 (a) similar to the block diagram or the time-domain diagram shown in Fig
4 (b) similar to the signal flow graph.

Fig 4. Simulation diagram for an integrator: (a) block diagram form, with a 1/s block taking x2(t) to x1(t); (b) signal flow graph form, with a branch of gain s⁻¹ from x2(t) to x1(t)


It is important to note that although the symbol 1/s is used for integration, the simulation diagram is a time-domain representation. The number of integrators is equal to the number of state variables. For example, for the state equation in Example 2 we have three integrators in cascade, and the three state variables are assigned to the outputs of the integrators as shown in Fig 5. The last equation in (8) is represented via a summing point and feedback paths. Completing the output equation, the simulation diagram known as the phase-variable control canonical form is obtained.

Fig 5. Simulation diagram for Example 2: (a) block diagram form and (b) signal flow graph form. The input 5u(t) drives a summing point followed by three cascaded integrators whose outputs are x3, x2 and x1 = y; the integrator outputs x3, x2 and x1 are fed back to the summing point through the gains −2, −3 and −4 respectively.

Transfer Function to State-Space Conversion: Direct Decomposition

Consider the transfer function of a third-order system


\frac{Y(s)}{U(s)} = \frac{b_2 s^2 + b_1 s + b_0}{s^3 + a_2 s^2 + a_1 s + a_0} ……………………………………….(20b)

where the numerator degree is lower than that of the denominator. The
above transfer function is decomposed into two blocks as shown in Fig 6.

Fig 6. Transfer function (20b) arranged in cascade form: U(s) → [1/(s³ + a₂s² + a₁s + a₀)] → W(s) → [b₂s² + b₁s + b₀] → Y(s)


Denoting the output of the first block as W ( s ) , we have

W(s) = \frac{U(s)}{s^3 + a_2 s^2 + a_1 s + a_0} \quad \text{and} \quad Y(s) = b_2 s^2 W(s) + b_1 s W(s) + b_0 W(s)

or

s^3 W(s) = -a_2 s^2 W(s) - a_1 s W(s) - a_0 W(s) + U(s)

This results in the following time-domain equations

\dddot{w} = -a_2 \ddot{w} - a_1 \dot{w} - a_0 w + u(t) \quad \text{and} \quad y(t) = b_2 \ddot{w} + b_1 \dot{w} + b_0 w

From the above expressions we see that \dddot{w} has to pass through three integrators to produce w, as shown in Fig 7. Completing the above equations results in the phase-variable control canonical simulation diagram.

Fig 7. Phase-variable control canonical simulation diagram (block diagram form): u(t) enters a summing point followed by three cascaded 1/s integrators producing \ddot{w} = x_3, \dot{w} = x_2 and w = x_1; the integrator outputs are fed back through the gains -a_2, -a_1 and -a_0, and fed forward through b_2, b_1 and b_0 to a summing point that forms y.

The above simulation in block diagram form is suitable for SIMULINK


diagram construction. You may find it easier to construct the simulation
diagram similar to the signal flow graph as shown in Fig 8.

Fig 8. Phase-variable control canonical simulation diagram (signal flow graph form): branches of gain s⁻¹ connect \dddot{w} to \ddot{w} = x_3, \ddot{w} to \dot{w} = x_2 and \dot{w} to w = x_1; feedback branches -a_2, -a_1 and -a_0 return to the input node, and forward branches b_2, b_1 and b_0 form the output y.


In order to write the state equation, the state variables x1 (t ) , x2 (t ) , and x3 (t ) are
assigned to the output of each integrator from the right to the left. Next an
equation is written for the input of each integrator. The results are

\dot{x}_1 = x_2

\dot{x}_2 = x_3

\dot{x}_3 = -a_0 x_1 - a_1 x_2 - a_2 x_3 + u(t)

and the output equation is

y = b_0 x_1 + b_1 x_2 + b_2 x_3

or in matrix form

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_0 & -a_1 & -a_2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t) ………………. (20c)

y = \begin{bmatrix} b_0 & b_1 & b_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}

It is important to note that Mason’s gain formula can be applied to the simulation diagram in Fig 8 to recover the original transfer function. Indeed, the Δ of Mason’s gain formula is the characteristic polynomial. Likewise, the determinant of the (sI − A) matrix in (20c) gives the characteristic equation. Keep in mind that there is not a unique state space representation for a given transfer function.

Example 3:
Draw the simulation diagram and find the state-space representation for the
following transfer function

G(s) = \frac{Y(s)}{U(s)} = \frac{s^2 + 7s + 2}{s^3 + 9s^2 + 26s + 24}
Solution:

Draw the transfer function block diagram in cascade form

U(s) → [1/(s³ + 9s² + 26s + 24)] → W(s) → [s² + 7s + 2] → Y(s)

From this we have

s^3 W(s) = -9s^2 W(s) - 26 s W(s) - 24 W(s) + U(s) \quad \text{and} \quad Y(s) = s^2 W(s) + 7 s W(s) + 2 W(s)

or in time-domain

\dddot{w} = -9\ddot{w} - 26\dot{w} - 24w + u \quad \text{and} \quad y = \ddot{w} + 7\dot{w} + 2w

The above time-domain equations yield the following simulation diagram

(Simulation diagram: three cascaded s⁻¹ branches give \ddot{w} = x_3, \dot{w} = x_2 and w = x_1; feedback gains −9, −26 and −24 return to the input node, and forward gains 1, 7 and 2 form the output y.)

To obtain the state equation, the state variables x1 (t ) , x2 (t ) , and x3 (t ) are


assigned to the output of each integrator from the right to the left. Next an
equation is written for the input of each integrator. The results are

\dot{x}_1 = x_2

\dot{x}_2 = x_3

\dot{x}_3 = -24 x_1 - 26 x_2 - 9 x_3 + u(t)

and the output equation is

y = 2 x1 + 7 x2 + x3

or in matrix form

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -24 & -26 & -9 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t)

y = \begin{bmatrix} 2 & 7 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
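
As a sanity check (an addition, not part of the original solution), the realization just obtained can be converted back to a transfer function with scipy.signal.ss2tf; the numerator and denominator should reproduce G(s) of Example 3.

    import numpy as np
    from scipy.signal import ss2tf

    A = np.array([[0, 1, 0], [0, 0, 1], [-24, -26, -9]], dtype=float)
    B = np.array([[0], [0], [1]], dtype=float)
    C = np.array([[2, 7, 1]], dtype=float)
    D = np.array([[0]], dtype=float)

    num, den = ss2tf(A, B, C, D)
    print(num)   # ~ [[0, 1, 7, 2]]  -> s^2 + 7s + 2
    print(den)   # ~ [1, 9, 26, 24]  -> s^3 + 9s^2 + 26s + 24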

Example 4: A system is represented by the following differential equation

5\frac{d^2 c(t)}{dt^2} + 6\frac{dc(t)}{dt} + 8c(t) = r(t)

Write down its state equation, output equation and transfer function. Initial conditions are zero.

Solution:

Let

c(t) = x_1

\frac{dc(t)}{dt} = x_2

\dot{x}_1 = x_2 …………………………………………………(21)

\frac{d^2 c(t)}{dt^2} = -\frac{6}{5}\frac{dc(t)}{dt} - \frac{8}{5}c(t) + \frac{1}{5}r(t)

\dot{x}_2 = -\frac{8}{5}x_1 - \frac{6}{5}x_2 + \frac{1}{5}r(t) ……………………………………………………(22)

From equation (21) and equation (22), we get

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -\frac{8}{5} & -\frac{6}{5} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} 0 \\ \frac{1}{5} \end{bmatrix} r(t)

c(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

Taking the Laplace transform of the differential equation with zero initial conditions,

5s^2 C(s) + 6sC(s) + 8C(s) = R(s)

\frac{C(s)}{R(s)} = \frac{1}{5s^2 + 6s + 8}

Example 5: Consider the system given by \frac{Y(s)}{U(s)} = \frac{2s^2 + 3s + 1}{s^3 + 5s^2 + 6s + 7}. Obtain the state space representation in the controllable canonical form and the observable canonical form.

Solution:

Controllable canonical form:

Comparing the given transfer function with equation (10), we get b0=0, b1=2,
b2=3, b3=1, a1=5, a2=6, a3=7.

Using equations (11) & (12), the state space representation in controllable canonical form is given by

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_3 & -a_2 & -a_1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} b_3 - a_3 b_0 & b_2 - a_2 b_0 & b_1 - a_1 b_0 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + b_0 u

Thus, the state model in controllable canonical form for the given transfer function is

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -7 & -6 & -5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} 1 & 3 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}

Observable canonical form:

Using equations (13) & (14), the state space representation in observable canonical form is given by

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} 0 & 0 & -a_3 \\ 1 & 0 & -a_2 \\ 0 & 1 & -a_1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} b_3 - a_3 b_0 \\ b_2 - a_2 b_0 \\ b_1 - a_1 b_0 \end{bmatrix} u

y = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + b_0 u

Thus, the state model in observable canonical form for the given transfer function is

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} 0 & 0 & -7 \\ 1 & 0 & -6 \\ 0 & 1 & -5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 1 \\ 3 \\ 2 \end{bmatrix} u

y = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
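
A brief numerical cross-check of Example 5 (an addition to the original text): the CCF and OCF state matrices are transposes of each other, so they share the same eigen values, and both realizations reduce to the same transfer function via scipy.signal.ss2tf.

    import numpy as np
    from scipy.signal import ss2tf

    # Controllable canonical form of Example 5
    A_c = np.array([[0, 1, 0], [0, 0, 1], [-7, -6, -5]], dtype=float)
    B_c = np.array([[0], [0], [1]], dtype=float)
    C_c = np.array([[1, 3, 2]], dtype=float)

    # Observable canonical form of Example 5
    A_o = A_c.T
    B_o = C_c.T
    C_o = B_c.T

    print(np.allclose(np.sort(np.linalg.eigvals(A_c)), np.sort(np.linalg.eigvals(A_o))))
    print(ss2tf(A_c, B_c, C_c, np.zeros((1, 1)))[0].round(6))   # numerator 2s^2 + 3s + 1
    print(ss2tf(A_o, B_o, C_o, np.zeros((1, 1)))[0].round(6))   # same numerator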

Example 6:

Obtain a state model of the system described by the transfer function given
below by diagonal canonical form

\frac{Y(s)}{U(s)} = \frac{s^3 + 7s^2 + 12s + 8}{s^3 + 6s^2 + 11s + 6}

Solution:

The order of the numerator and denominator polynomials is the same. Therefore, divide the numerator polynomial by the denominator polynomial to extract the constant term, and then write the remainder in partial fraction form. We have

\frac{Y(s)}{U(s)} = 1 + \frac{s^2 + s + 2}{(s+1)(s+2)(s+3)}
This transfer function can also be written in partial fraction expansion form
as

\frac{Y(s)}{U(s)} = 1 + \frac{1}{s+1} - \frac{4}{s+2} + \frac{4}{s+3}

Compare the above transfer function with equation (15), we get b0=1, K1=1,
K2=-4, K3=4, p1=1, p2=2 and p3=3.
Using equations (16) & (17), the state space representation in diagonal canonical form is given by

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} -p_1 & 0 & 0 \\ 0 & -p_2 & 0 \\ 0 & 0 & -p_3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} K_1 & K_2 & K_3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + b_0 u

Thus, the state model in diagonal canonical form for the given transfer function is

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} 1 & -4 & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + u

Example 7:

Obtain a state model of the system described by the transfer function given
below by Jordan canonical form

\frac{Y(s)}{U(s)} = \frac{2s^2 + 6s + 5}{s^3 + 4s^2 + 5s + 2}

Solution:

The given transfer function can be written in partial fraction expansion form
as

\frac{Y(s)}{U(s)} = \frac{2s^2 + 6s + 5}{s^3 + 4s^2 + 5s + 2} = \frac{2s^2 + 6s + 5}{(s+1)^2 (s+2)}
= \frac{1}{(s+1)^2} + \frac{1}{s+1} + \frac{1}{s+2}

Comparing this transfer function with equation (19), we get b0=0, K1=1,
K2=1, K3=1, p1=p2=1 and p3=2.

Using equations (20) & (20a), the state space representation in Jordan canonical form is given by

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} -p_1 & 1 & 0 \\ 0 & -p_1 & 0 \\ 0 & 0 & -p_3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} K_1 & K_2 & K_3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + b_0 u

Thus, the state model in Jordan canonical form for the given transfer function is

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}

5. Concept of Eigen Values and Eigen Vectors

5.1 Eigen Values


The roots of the characteristic equation

|\lambda I - A| = 0 \;\Rightarrow\; \lambda^n + a_1 \lambda^{n-1} + a_2 \lambda^{n-2} + \cdots + a_n = 0

are called the eigen values of matrix A. Also, the roots of the characteristic equation of the system in terms of s,

|sI - A| = 0 \;\Rightarrow\; s^n + a_1 s^{n-1} + a_2 s^{n-2} + \cdots + a_n = 0

are the same as the poles of the closed loop transfer function. Hence, the eigen values are the closed loop poles of the system.
Some important properties related to eigen values are given below-

1. Any square matrix A and its transpose AT have the same eigen values.
2. Sum of eigen values of any matrix A is equal to the trace of the matrix A.
3. Product of the eigen values of any matrix A is equal to the determinant of
the matrix A.
4. If matrix A is multiplied by a scalar, then its eigen values are multiplied by the same scalar.
5. The eigen values of the inverse of matrix A are the reciprocals of the eigen values of A.
6. If all the elements of the matrix are real, then the eigen values of that matrix are either real or occur in complex conjugate pairs.
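
The trace and determinant properties above are easy to confirm numerically; the sketch below does so for an arbitrary, assumed test matrix using numpy.linalg.eigvals.

    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-6.0, -11.0, -6.0]])   # assumed test matrix (companion form)

    eigvals = np.linalg.eigvals(A)
    print(np.isclose(eigvals.sum(), np.trace(A)))        # sum of eigen values = trace
    print(np.isclose(eigvals.prod(), np.linalg.det(A)))  # product of eigen values = determinant
    print(np.allclose(np.sort(eigvals), np.sort(np.linalg.eigvals(A.T))))  # A and A^T share eigen values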

5.2 Eigen Vectors

Any non-zero vector mᵢ that satisfies the matrix equation (λᵢI − A)mᵢ = 0 is called the eigen vector of A associated with the eigen value λᵢ, where λᵢ, i = 1, 2, 3, ……, n denotes the ith eigen value of A.

This eigen vector may be obtained by taking the co-factors of the matrix (λᵢI − A) along any row and transposing that row of co-factors.

Mathematically, the eigen vector can be obtained by taking co-factors of the matrix (\lambda_i I - A) along any row.

Therefore M_i = eigen vector for \lambda_i = \begin{bmatrix} C_{k1} \\ C_{k2} \\ \vdots \\ C_{kn} \end{bmatrix}, \quad k = 1, 2, \ldots, n

where C_{ki} is the co-factor of the matrix (\lambda_i I - A) along the kth row.

Note: If the cofactors along a particular row give a null solution, i.e. all elements of the corresponding eigen vector are zero, then the cofactors along another row must be taken; otherwise the inverse of the modal matrix M will not exist.

6. Modal Matrix or Diagonalising Matrix, M

Let M1, M2, …, Mn be the eigen vectors of matrix A corresponding to the eigen values λ1, λ2, …, λn respectively. Each eigen vector is of order (n × 1), and placing all the eigen vectors one after another as the columns of another matrix, an (n × n) matrix is obtained. Such a matrix, obtained by placing all the eigen vectors together, is called the modal matrix or diagonalising matrix of matrix A.

Therefore, M = Modal Matrix = \begin{bmatrix} M_1 & M_2 & \cdots & M_n \end{bmatrix}


AM = A\begin{bmatrix} M_1 & M_2 & \cdots & M_n \end{bmatrix}
   = \begin{bmatrix} AM_1 & AM_2 & \cdots & AM_n \end{bmatrix}
   = \begin{bmatrix} \lambda_1 M_1 & \lambda_2 M_2 & \cdots & \lambda_n M_n \end{bmatrix}

Thus, AM = M\Lambda ……………………………………(23)

Where \Lambda = \begin{bmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2 & 0 & \cdots & 0 \\ 0 & 0 & \lambda_3 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_n \end{bmatrix} = Diagonal matrix

Now, pre-multiplying equation (23) by M^{-1},

M^{-1}AM = M^{-1}M\Lambda

Therefore, Diagonal matrix = \Lambda = M^{-1}AM

Note: (i) Both A and \Lambda have the same characteristic equation.
(ii) Both A and \Lambda have the same eigen values.
(iii) Under the transformation M^{-1}AM, the eigen values of matrix A remain unchanged and matrix A is converted to diagonal form.
6.1 Vander Monde Matrix

If the system matrix A is of the companion form, i.e. in phase variable or Bush form, and if all its n eigen values are distinct, then the modal matrix takes a special form called the Vander Monde matrix.

Vander Monde matrix, V = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \cdots & \lambda_n \\ \lambda_1^2 & \lambda_2^2 & \cdots & \lambda_n^2 \\ \vdots & & & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1} \end{bmatrix} …………….(24)

Where \lambda_i (i = 1, 2, \ldots, n) are the eigen values of the system matrix A.

6.2 Vander Monde Matrix for Multiple Eigen Values

If the system matrix A in companion form involves multiple (repeated) eigen values, then a modified procedure is adopted. If only r of its n eigen values are distinct and the remaining (n − r) eigen values are repeated, then the modal matrix is the modified Vander Monde matrix,

V = \begin{bmatrix}
1 & \cdots & 1 & 1 & 0 & 0 \\
\lambda_1 & \cdots & \lambda_r & \lambda_{r+1} & 1 & 0 \\
\lambda_1^2 & \cdots & \lambda_r^2 & \lambda_{r+1}^2 & 2\lambda_{r+1} & 1 \\
\lambda_1^3 & \cdots & \lambda_r^3 & \lambda_{r+1}^3 & 3\lambda_{r+1}^2 & 3\lambda_{r+1} \\
\vdots & & \vdots & \vdots & \vdots & \vdots \\
\lambda_1^{n-1} & \cdots & \lambda_r^{n-1} & \lambda_{r+1}^{n-1} & \dfrac{d}{d\lambda_{r+1}}\lambda_{r+1}^{n-1} & \dfrac{1}{2!}\dfrac{d^2}{d\lambda_{r+1}^2}\lambda_{r+1}^{n-1}
\end{bmatrix} ……………….(25)

(Here the repeated eigen value \lambda_{r+1} is shown with multiplicity three; each additional column for a repeated eigen value is obtained by differentiating the previous column with respect to that eigen value and dividing by the factorial of the order of the derivative.)
Note: The generalised eigen vectors are the eigen vectors corresponding to repeated eigen values. If, out of the n eigen values of matrix A, r are distinct, the corresponding r eigen vectors can be determined in the usual manner by taking the cofactors along any row of (\lambda_i I - A), where \lambda_i denotes the ith distinct eigen value, i = 1, 2, …, r. If the remaining (n − r) eigen values are repeated and equal to \lambda_j, then the eigen vector for \lambda_j is determined by taking cofactors along any row of (\lambda_j I - A), and the remaining generalised eigen vectors are determined by taking derivatives, with respect to \lambda_j, of the cofactors forming that eigen vector, as given below.

M = \begin{bmatrix} M_1 & M_2 & M_3 & \cdots & M_r \end{bmatrix}
  = \begin{bmatrix}
    C_{j1} & \dfrac{1}{1!}\dfrac{dC_{j1}}{d\lambda_j} & \dfrac{1}{2!}\dfrac{d^2 C_{j1}}{d\lambda_j^2} & \cdots & \dfrac{1}{(r-1)!}\dfrac{d^{r-1} C_{j1}}{d\lambda_j^{r-1}} \\
    C_{j2} & \dfrac{1}{1!}\dfrac{dC_{j2}}{d\lambda_j} & \dfrac{1}{2!}\dfrac{d^2 C_{j2}}{d\lambda_j^2} & \cdots & \dfrac{1}{(r-1)!}\dfrac{d^{r-1} C_{j2}}{d\lambda_j^{r-1}} \\
    \vdots & \vdots & \vdots & & \vdots \\
    C_{jn} & \dfrac{1}{1!}\dfrac{dC_{jn}}{d\lambda_j} & \dfrac{1}{2!}\dfrac{d^2 C_{jn}}{d\lambda_j^2} & \cdots & \dfrac{1}{(r-1)!}\dfrac{d^{r-1} C_{jn}}{d\lambda_j^{r-1}}
    \end{bmatrix}

(In this display, the block of columns M_1, M_2, \ldots, M_r shown corresponds to an eigen value \lambda_j that is repeated r times; the first column is the ordinary eigen vector formed from the cofactors C_{j1}, \ldots, C_{jn}, and the remaining columns are the generalised eigen vectors obtained from its successive derivatives.)

Where matrix M is called modal matrix which has been obtained by placing
all the eigen vectors one after the other.

6.3 Diagonalisation
The techniques used for transforming a general state model into a canonical
form or diagonal form are generally known as diagonalisation techniques.

Consider an nth order MIMO state model in which matrix A is not diagonal.

\dot{x} = Ax + Bu
y = Cx + Du ……………………………………(26)

Let us define a new state vector z such that

x = Mz ………………………………….(27)

Where M is a ( n  n ) non-singular matrix.

With this transformation, the original state model becomes

\dot{z} = M^{-1}AMz + M^{-1}Bu
y = CMz + Du ……………………………………(28)

Equation (28) is a canonical state model in which M^{-1}AM is a diagonal matrix; matrix M is called the diagonalising matrix or modal matrix. Equation (28) can also be written as

\dot{z} = \Lambda z + \tilde{B}u
y = \tilde{C}z + Du …………………………………………(29)

Where \Lambda = M^{-1}AM = Diagonal matrix

\tilde{B} = M^{-1}B

and \tilde{C} = CM

Example 8:
Diagonalize the system matrix given below.

0 1 −1
A =  −6 −11 6 

 −6 −11 5 

Solution:

The eigen values of the system matrix A are the roots of the characteristic equation

|\lambda I - A| = \begin{vmatrix} \lambda & -1 & 1 \\ 6 & \lambda + 11 & -6 \\ 6 & 11 & \lambda - 5 \end{vmatrix} = 0

\Rightarrow \lambda^3 + 6\lambda^2 + 11\lambda + 6 = 0
\Rightarrow (\lambda + 1)(\lambda + 2)(\lambda + 3) = 0

Therefore, the eigen values are \lambda_1 = -1, \lambda_2 = -2 and \lambda_3 = -3.

For \lambda_1: \; \lambda_1 I - A = \begin{bmatrix} -1 & -1 & 1 \\ 6 & 10 & -6 \\ 6 & 11 & -6 \end{bmatrix}, \quad
M_1 = \begin{bmatrix} C_{11} \\ C_{12} \\ C_{13} \end{bmatrix} = \begin{bmatrix} 6 \\ 0 \\ 6 \end{bmatrix} \propto \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}

For \lambda_2: \; \lambda_2 I - A = \begin{bmatrix} -2 & -1 & 1 \\ 6 & 9 & -6 \\ 6 & 11 & -7 \end{bmatrix}, \quad
M_2 = \begin{bmatrix} C_{11} \\ C_{12} \\ C_{13} \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \\ 12 \end{bmatrix} \propto \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix}

For \lambda_3: \; \lambda_3 I - A = \begin{bmatrix} -3 & -1 & 1 \\ 6 & 8 & -6 \\ 6 & 11 & -8 \end{bmatrix}, \quad
M_3 = \begin{bmatrix} C_{11} \\ C_{12} \\ C_{13} \end{bmatrix} = \begin{bmatrix} 2 \\ 12 \\ 18 \end{bmatrix} \propto \begin{bmatrix} 1 \\ 6 \\ 9 \end{bmatrix}

Therefore, M = \begin{bmatrix} M_1 & M_2 & M_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 6 \\ 1 & 4 & 9 \end{bmatrix}

M^{-1} = \frac{Adj\,M}{|M|} = \begin{bmatrix} 3 & 2.5 & -2 \\ -3 & -4 & 3 \\ 1 & 1.5 & -1 \end{bmatrix}

The diagonal matrix is given by

\Lambda = M^{-1}AM = \begin{bmatrix} 3 & 2.5 & -2 \\ -3 & -4 & 3 \\ 1 & 1.5 & -1 \end{bmatrix}
\begin{bmatrix} 0 & 1 & -1 \\ -6 & -11 & 6 \\ -6 & -11 & 5 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 6 \\ 1 & 4 & 9 \end{bmatrix}
= \begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix}

Example 9:
Reduce the system matrix A to diagonal matrix

0 1 0
A =  0 0 1 
 −6 −11 −6 

Solution:

The eigen values of matrix A can be obtained as

|\lambda I - A| = \begin{vmatrix} \lambda & -1 & 0 \\ 0 & \lambda & -1 \\ 6 & 11 & \lambda + 6 \end{vmatrix} = 0

\Rightarrow \lambda^3 + 6\lambda^2 + 11\lambda + 6 = 0
\Rightarrow (\lambda + 1)(\lambda + 2)(\lambda + 3) = 0

Therefore, the eigen values are \lambda_1 = -1, \lambda_2 = -2 and \lambda_3 = -3.

Since the matrix A is in companion form and has distinct eigen values, the modal matrix M can be written directly in Vander Monde form as

M = \begin{bmatrix} 1 & 1 & 1 \\ \lambda_1 & \lambda_2 & \lambda_3 \\ \lambda_1^2 & \lambda_2^2 & \lambda_3^2 \end{bmatrix}
  = \begin{bmatrix} 1 & 1 & 1 \\ -1 & -2 & -3 \\ 1 & 4 & 9 \end{bmatrix}

M^{-1} = \frac{Adj\,M}{|M|} = \frac{1}{2}\begin{bmatrix} 6 & 5 & 1 \\ -6 & -8 & -2 \\ 2 & 3 & 1 \end{bmatrix}

The diagonal matrix is given by

\Lambda = M^{-1}AM = \frac{1}{2}\begin{bmatrix} 6 & 5 & 1 \\ -6 & -8 & -2 \\ 2 & 3 & 1 \end{bmatrix}
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \\ -1 & -2 & -3 \\ 1 & 4 & 9 \end{bmatrix}
= \begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix}

Example 10:
Reduce the system matrix A to diagonal matrix

0 1 0
A =  0 0 1 
 −2 −5 −4 

Solution:

The eigen values of matrix A can be obtained as

|\lambda I - A| = \begin{vmatrix} \lambda & -1 & 0 \\ 0 & \lambda & -1 \\ 2 & 5 & \lambda + 4 \end{vmatrix} = 0

\Rightarrow \lambda^3 + 4\lambda^2 + 5\lambda + 2 = 0
\Rightarrow (\lambda + 1)(\lambda + 1)(\lambda + 2) = 0

Therefore, the eigen values are \lambda_1 = -1, \lambda_2 = -1 and \lambda_3 = -2.

Since the system matrix A is in companion form and two of the eigen values are repeated, the modal matrix M can be obtained directly in modified Vander Monde form as

M = \begin{bmatrix} 1 & \dfrac{d(1)}{d\lambda_1} & 1 \\ \lambda_1 & \dfrac{d\lambda_1}{d\lambda_1} & \lambda_3 \\ \lambda_1^2 & \dfrac{d\lambda_1^2}{d\lambda_1} & \lambda_3^2 \end{bmatrix}
  = \begin{bmatrix} 1 & 0 & 1 \\ \lambda_1 & 1 & \lambda_3 \\ \lambda_1^2 & 2\lambda_1 & \lambda_3^2 \end{bmatrix}
  = \begin{bmatrix} 1 & 0 & 1 \\ -1 & 1 & -2 \\ 1 & -2 & 4 \end{bmatrix}

M^{-1} = \frac{Adj\,M}{|M|} = \begin{bmatrix} 0 & -2 & -1 \\ 2 & 3 & 1 \\ 1 & 2 & 1 \end{bmatrix}

The (Jordan-form) diagonal matrix is given by

\Lambda = M^{-1}AM = \begin{bmatrix} 0 & -2 & -1 \\ 2 & 3 & 1 \\ 1 & 2 & 1 \end{bmatrix}
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -5 & -4 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 1 \\ -1 & 1 & -2 \\ 1 & -2 & 4 \end{bmatrix}
= \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -2 \end{bmatrix}

Example 11:
Obtain the state model for the following transfer function

\frac{Y(s)}{U(s)} = \frac{s^2 + 3s + 9}{5s^5 + 8s^4 + 24s^3 + 34s^2 + 23s + 6}

Solution:

\frac{Y(s)}{U(s)} = \frac{s^2 + 3s + 9}{5s^5 + 8s^4 + 24s^3 + 34s^2 + 23s + 6}
= \frac{s^2 + 3s + 9}{5(s+1)^3 (s+2)(s+3)}
= \frac{0.7}{(s+1)^3} - \frac{0.95}{(s+1)^2} + \frac{1.175}{s+1} - \frac{1.4}{s+2} + \frac{0.225}{s+3}

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \\ \dot{x}_5 \end{bmatrix} =
\begin{bmatrix} -1 & 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & -2 & 0 \\ 0 & 0 & 0 & 0 & -3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} u(t)

y(t) = \begin{bmatrix} 0.7 & -0.95 & 1.175 & -1.4 & 0.225 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}

7. Transfer Function from the State Model


The state model of a system is given by

\dot{x} = Ax + Bu …………………………………………..(30)

and y = Cx + Du …………………………………………..(31)

Taking Laplace transform on both sides of equations (30) and (31), we get

sX (s) − x(0) = AX(s) + BU(s) …………………………………….(32)

Y (s) = CX(s) + DU(s) …………………………………… (33)

The initial conditions are taken as zero, i.e. x(0) = 0, since a transfer function is to be determined. Therefore,

sX(s) = AX(s) + BU(s)

(sI - A)X(s) = BU(s)

X(s) = (sI - A)^{-1}BU(s)

Substituting X(s) in equation (33), we get

Y(s) = C(sI - A)^{-1}BU(s) + DU(s)

Y(s) = \left[ C(sI - A)^{-1}B + D \right] U(s)

Therefore, the transfer function is

\frac{Y(s)}{U(s)} = C(sI - A)^{-1}B + D …………………………………………(34)

If D is a null matrix, equation (34) becomes

\frac{Y(s)}{U(s)} = C(sI - A)^{-1}B ………………………………………..(35)
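
Equation (34) is exactly what scipy.signal.ss2tf evaluates. The sketch below applies it to an assumed single-input single-output state model; the matrices are illustrative, not taken from the text.

    import numpy as np
    from scipy.signal import ss2tf

    # Assumed SISO state model
    A = np.array([[0, 1], [-2, -3]], dtype=float)
    B = np.array([[0], [1]], dtype=float)
    C = np.array([[1, 0]], dtype=float)
    D = np.array([[0]], dtype=float)

    num, den = ss2tf(A, B, C, D)   # coefficients of C(sI - A)^{-1} B + D
    print(num)                     # ~ [[0, 0, 1]] -> numerator 1
    print(den)                     # ~ [1, 3, 2]   -> s^2 + 3s + 2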

8. Solution of State Equation


8.1 Solution of Homogeneous State Equation

A state equation is homogeneous if the input to the system is zero.

We know that

\frac{dx(t)}{dt} = Ax(t) + Bu(t) …………………………………….(36)

For the homogeneous state equation, u(t) = 0, so

\frac{dx(t)}{dt} = Ax(t) ………………………………(37)

Taking Laplace transform on both sides of equation (37), we get

sX(s) - x(0) = AX(s)

(sI - A)X(s) = x(0)

X(s) = (sI - A)^{-1}x(0) = \Phi(s)\,x(0) ………………………………………..(38)

Where I is the identity matrix and

\Phi(s) = (sI - A)^{-1}

Taking the inverse Laplace transform on both sides of equation (38) yields

x(t) = L^{-1}\left[\Phi(s)\right] x(0) = \phi(t)\,x(0) ………………………………………………(39)

From equation (39), the state transition matrix is

\phi(t) = L^{-1}\left[\Phi(s)\right] = L^{-1}\left[(sI - A)^{-1}\right] …………………………..(40)

The solution of the homogeneous state equation in the time domain is given by

x(t) = e^{At} x(0) …………………………………………..(41)

Comparing equation (39) and equation (41), we get

\phi(t) = e^{At} ………………………………………(42)

The term e^{At} can be expressed in infinite series form as

e^{At} = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots = \sum_{i=0}^{\infty} \frac{A^i t^i}{i!} ………….(43)
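
The series (43) can be compared against scipy.linalg.expm, which computes the matrix exponential directly; the sketch below truncates the series after a number of terms and checks the agreement for an assumed A and t.

    import numpy as np
    from scipy.linalg import expm
    from math import factorial

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed state matrix
    t = 0.5

    # Truncated series  I + At + A^2 t^2/2! + ...
    phi_series = sum(np.linalg.matrix_power(A, i) * t**i / factorial(i) for i in range(20))
    phi_exact = expm(A * t)                    # state transition matrix e^{At}

    print(np.allclose(phi_series, phi_exact))  # True for a sufficient number of terms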

8.1.1 Properties of State Transition Matrix


(i) \phi(0) = I

We know that \phi(t) = e^{At}. Putting t = 0,

\phi(0) = e^{A(0)} = e^{0} = I

(ii) \phi(-t) = \phi^{-1}(t)

\phi(t) = e^{At}
\phi(t)\,e^{-At} = e^{At}e^{-At} = I
\Rightarrow e^{-At} = \phi^{-1}(t)
\Rightarrow \phi(-t) = e^{-At} = \phi^{-1}(t)

(iii) \phi(t_1 + t_2) = \phi(t_1)\phi(t_2) = \phi(t_2)\phi(t_1)

\phi(t_1 + t_2) = e^{A(t_1 + t_2)} = e^{At_1}e^{At_2} = \phi(t_1)\phi(t_2) = \phi(t_2)\phi(t_1)

(iv) \phi(t_2 - t_1)\,\phi(t_1 - t_0) = \phi(t_2 - t_0)

\phi(t_2 - t_1)\,\phi(t_1 - t_0) = e^{A(t_2 - t_1)}e^{A(t_1 - t_0)} = e^{A(t_2 - t_0)} = \phi(t_2 - t_0)

(v) \left[\phi(t)\right]^K = \phi(Kt), where K is a positive integer

\left[\phi(t)\right]^K = e^{At}\,e^{At}\cdots e^{At} \; (K \text{ terms}) = e^{KAt} = \phi(Kt)

(vi) \left[\phi(t)\right]^{-1} = \phi(-t)

\left[\phi(t)\right]^{-1} = \left[e^{At}\right]^{-1} = e^{-At} = \phi(-t)

Note:
State transition matrix represents the free response of the system as it satisfies the
homogeneous state equation. In other words, it governs the response that is excited
by the initial conditions only. This matrix is dependent only upon the state matrix
A. The state-transition matrix is completely defined as the transition of the states
from the initial time t = 0 to any time t when the inputs are zero.
8.2 Solution of Non-homogeneous state equation

The time response of a system represented by state equation on application


of input function u along with given initial conditions can be considered in
two parts:

(i) Zero input response (ZIR) wherein only initial conditions are
considered and input function given by u is zero.
(ii) Zero state response (ZSR) wherein only input function u is
considered and initial conditions are zero.

The state equation of a dynamic system is given by

\frac{dx(t)}{dt} = Ax(t) + Bu(t) ………………………………(44)

Taking the Laplace transform on both sides of equation (44), we get

sX(s) - x(0) = AX(s) + BU(s)

(sI - A)X(s) = x(0) + BU(s)

X(s) = (sI - A)^{-1}x(0) + (sI - A)^{-1}BU(s)

X(s) = \Phi(s)\,x(0) + \Phi(s)\,BU(s) ……………………………………..(45)

Taking inverse Laplace transform of equation (45) on both sides yields

x(t) = \phi(t)\,x(0) + L^{-1}\left[\Phi(s)\,BU(s)\right] …………………………………….(46)

The equation (46) represents the time response of the state equation (44).
The first term on the RHS of equation (46) is zero input response and the
second term is zero state response.
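
The two components of equation (46) can be obtained numerically with scipy.signal.lsim, which simulates the state equation from a given x(0) and input record; the sketch below computes the zero-input and zero-state responses separately for an assumed system and a step input, then checks that they add up to the complete response.

    import numpy as np
    from scipy.signal import lsim

    A = np.array([[0, 1], [-2, -3]], dtype=float)   # assumed system
    B = np.array([[0], [1]], dtype=float)
    C = np.array([[1, 0]], dtype=float)
    D = np.array([[0]], dtype=float)

    t = np.linspace(0, 5, 501)
    u = np.ones_like(t)                             # unit step input
    x0 = np.array([1.0, 0.0])                       # initial state

    _, y_zir, _ = lsim((A, B, C, D), U=np.zeros_like(t), T=t, X0=x0)  # zero-input response
    _, y_zsr, _ = lsim((A, B, C, D), U=u, T=t, X0=np.zeros(2))        # zero-state response
    _, y_tot, _ = lsim((A, B, C, D), U=u, T=t, X0=x0)                 # complete response
    print(np.allclose(y_tot, y_zir + y_zsr))        # superposition holds for LTI systems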

Example 12:
A system is described by the following state-space equations

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -6 & -5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)

y = \begin{bmatrix} 8 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
Obtain the system transfer function
Solution:

sI - A = \begin{bmatrix} s & -1 \\ 6 & s+5 \end{bmatrix}, \qquad
\Phi(s) = (sI - A)^{-1} = \frac{1}{s^2 + 5s + 6}\begin{bmatrix} s+5 & 1 \\ -6 & s \end{bmatrix}

G(s) = C(sI - A)^{-1}B = \begin{bmatrix} 8 & 1 \end{bmatrix}
\frac{1}{s^2 + 5s + 6}\begin{bmatrix} s+5 & 1 \\ -6 & s \end{bmatrix}
\begin{bmatrix} 0 \\ 1 \end{bmatrix}
= \frac{\begin{bmatrix} 8 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ s \end{bmatrix}}{s^2 + 5s + 6}
= \frac{8 + s}{s^2 + 5s + 6}

Therefore

G(s) = \frac{s + 8}{s^2 + 5s + 6}

Example 13:
Determine the time response of the system given below:

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad
x(0) = \begin{bmatrix} 1 & 1 \end{bmatrix}^T

and y = \begin{bmatrix} 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

Solution:

sI - A = \begin{bmatrix} s & -1 \\ 2 & s+3 \end{bmatrix}

\Phi(s) = (sI - A)^{-1}
= \frac{Adj \begin{bmatrix} s & -1 \\ 2 & s+3 \end{bmatrix}}{\begin{vmatrix} s & -1 \\ 2 & s+3 \end{vmatrix}}
= \frac{1}{s^2 + 3s + 2}\begin{bmatrix} s+3 & 1 \\ -2 & s \end{bmatrix}

\Phi(s) = \begin{bmatrix} \dfrac{s+3}{(s+1)(s+2)} & \dfrac{1}{(s+1)(s+2)} \\[6pt] \dfrac{-2}{(s+1)(s+2)} & \dfrac{s}{(s+1)(s+2)} \end{bmatrix}

Now, applying the partial fraction expansion, we have

\Phi(s) = \begin{bmatrix} \dfrac{2}{s+1} - \dfrac{1}{s+2} & \dfrac{1}{s+1} - \dfrac{1}{s+2} \\[6pt] \dfrac{-2}{s+1} + \dfrac{2}{s+2} & \dfrac{-1}{s+1} + \dfrac{2}{s+2} \end{bmatrix}

The state transition matrix is given by

\phi(t) = L^{-1}\left[\Phi(s)\right]
= \begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix}

Now, the time response is given by

x(t) = \phi(t)\,x(0)

\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \end{bmatrix}

x_1 = 2e^{-t} - e^{-2t} + e^{-t} - e^{-2t} = 3e^{-t} - 2e^{-2t}

x_2 = -2e^{-t} + 2e^{-2t} - e^{-t} + 2e^{-2t} = -3e^{-t} + 4e^{-2t}

y = \begin{bmatrix} 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_1 - x_2
= 3e^{-t} - 2e^{-2t} - \left(-3e^{-t} + 4e^{-2t}\right)

\Rightarrow y = 6e^{-t} - 6e^{-2t}
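
A quick numerical cross-check of Example 13 (added here, not in the original): evaluate φ(t)x(0) with scipy.linalg.expm at a test time and compare against the closed-form expressions derived above.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0, 1], [-2, -3]], dtype=float)
    x0 = np.array([1.0, 1.0])
    t = 0.7                                     # arbitrary test instant

    x_num = expm(A * t) @ x0                    # phi(t) x(0)
    x_ana = np.array([3*np.exp(-t) - 2*np.exp(-2*t),
                      -3*np.exp(-t) + 4*np.exp(-2*t)])
    y_ana = 6*np.exp(-t) - 6*np.exp(-2*t)

    print(np.allclose(x_num, x_ana))            # True
    print(np.isclose(x_num[0] - x_num[1], y_ana))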

9. Concepts of Controllability and Observability


The concepts of controllability and observability were introduced by Kalman in 1960 and are very important in the control of multivariable systems.
9.1 Controllability

A system is said to be completely state controllable if it is possible to transfer


the system state from any initial state x(t0) to any desired state x(t) in a
specified finite time by a control vector u(t).

Kalman’s test for Controllability

Consider an nth order multi-input LTI system with an m-dimensional control vector,

\dot{x} = Ax + Bu

The system is completely controllable if and only if the rank of the composite (controllability) matrix

Q_c = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}

is n.

If |Q_c| = 0 (rank of Q_c less than n), the system is not controllable.

If |Q_c| \neq 0 (rank of Q_c equal to n), the system is controllable.
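
Kalman's controllability test is a one-liner with NumPy: stack B, AB, …, A^{n−1}B and check the rank. The helper below and the pair (A, B) it is applied to are assumptions for illustration.

    import numpy as np

    def controllability_matrix(A, B):
        """Q_c = [B, AB, A^2 B, ..., A^(n-1) B]."""
        n = A.shape[0]
        blocks = [np.linalg.matrix_power(A, i) @ B for i in range(n)]
        return np.hstack(blocks)

    # Assumed pair (A, B) for illustration
    A = np.array([[0, 1], [-2, -3]], dtype=float)
    B = np.array([[0], [1]], dtype=float)

    Qc = controllability_matrix(A, B)
    print(Qc)
    print(np.linalg.matrix_rank(Qc) == A.shape[0])   # True -> controllable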

9.2 Observability

A system is said to be completely observable, if every state x(t0) can be


completely identified by measurements of the outputs y(t) over a finite time
interval.

Kalman’s test for Observability

Consider an nth order LTI system with a p-dimensional output vector,

\dot{x} = Ax + Bu
y = Cx

The system is completely observable if and only if the rank of the observability matrix

Q_o = \begin{bmatrix} C^T & A^T C^T & (A^T)^2 C^T & \cdots & (A^T)^{n-1} C^T \end{bmatrix}

is n.

where Q_o = observability matrix, C^T = transpose of matrix C, and A^T = transpose of matrix A.

If |Q_o| = 0 (rank of Q_o less than n), the system is not observable.

If |Q_o| \neq 0 (rank of Q_o equal to n), the system is observable.
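
The dual test for observability follows the same pattern; the helper below stacks Cᵀ, AᵀCᵀ, …, (Aᵀ)^{n−1}Cᵀ and checks the rank for an assumed pair (A, C).

    import numpy as np

    def observability_matrix(A, C):
        """Q_o = [C^T, A^T C^T, ..., (A^T)^(n-1) C^T]."""
        n = A.shape[0]
        At, Ct = A.T, C.T
        blocks = [np.linalg.matrix_power(At, i) @ Ct for i in range(n)]
        return np.hstack(blocks)

    # Assumed pair (A, C) for illustration
    A = np.array([[0, 1], [-2, -3]], dtype=float)
    C = np.array([[1, 0]], dtype=float)

    Qo = observability_matrix(A, C)
    print(np.linalg.matrix_rank(Qo) == A.shape[0])   # True -> observable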

Principle of Duality: It gives the relationship between controllability and observability.
❖ The pair (A, B) is controllable implies that the pair (Aᵀ, Bᵀ) is observable.
❖ The pair (A, C) is observable implies that the pair (Aᵀ, Cᵀ) is controllable.

Example 14:
Consider the system shown below

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} -4 & -1 \\ 3 & -1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} 1 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

Obtain the transfer function of the system and the state transition matrix.

Solution:

The transfer function of the system is given by

\frac{Y(s)}{U(s)} = C\,\Phi(s)\,B + D = C\,\Phi(s)\,B \qquad [\because D = 0]

\Phi(s) = (sI - A)^{-1} = \left( \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} -4 & -1 \\ 3 & -1 \end{bmatrix} \right)^{-1}
= \begin{bmatrix} s+4 & 1 \\ -3 & s+1 \end{bmatrix}^{-1}
= \frac{1}{s^2 + 5s + 7}\begin{bmatrix} s+1 & -1 \\ 3 & s+4 \end{bmatrix}

Therefore, the transfer function of the system is

\frac{Y(s)}{U(s)} = C\,\Phi(s)\,B
= \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{1}{s^2 + 5s + 7}\begin{bmatrix} s+1 & -1 \\ 3 & s+4 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{1}{s^2 + 5s + 7}\begin{bmatrix} s \\ s+7 \end{bmatrix}
= \frac{s}{s^2 + 5s + 7}

The state transition matrix is

\phi(t) = L^{-1}\left[(sI - A)^{-1}\right]
= L^{-1}\begin{bmatrix} \dfrac{s+1}{s^2+5s+7} & \dfrac{-1}{s^2+5s+7} \\[6pt] \dfrac{3}{s^2+5s+7} & \dfrac{s+4}{s^2+5s+7} \end{bmatrix}

= \begin{bmatrix}
e^{-2.5t}\cos\frac{\sqrt{3}}{2}t - \sqrt{3}\,e^{-2.5t}\sin\frac{\sqrt{3}}{2}t &
-\dfrac{2}{\sqrt{3}}\,e^{-2.5t}\sin\frac{\sqrt{3}}{2}t \\[6pt]
2\sqrt{3}\,e^{-2.5t}\sin\frac{\sqrt{3}}{2}t &
e^{-2.5t}\cos\frac{\sqrt{3}}{2}t + \sqrt{3}\,e^{-2.5t}\sin\frac{\sqrt{3}}{2}t
\end{bmatrix}

Note: To check the answer, put t = 0; \phi(t) should reduce to the identity matrix, i.e. \phi(0) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

Example 15:

Obtain State transition matrix of the following system


\dot{x}_1 = x_2
\dot{x}_2 = -2x_1 - 3x_2

Solution:

The given state equations can be written as

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

\Phi(s) = (sI - A)^{-1} = \begin{bmatrix} s & -1 \\ 2 & s+3 \end{bmatrix}^{-1}
= \begin{bmatrix} \dfrac{s+3}{s^2+3s+2} & \dfrac{1}{s^2+3s+2} \\[6pt] \dfrac{-2}{s^2+3s+2} & \dfrac{s}{s^2+3s+2} \end{bmatrix}

\phi(t) = L^{-1}\left[\Phi(s)\right]
= \begin{bmatrix}
e^{-1.5t}\cosh\frac{t}{2} + 3e^{-1.5t}\sinh\frac{t}{2} & 2e^{-1.5t}\sinh\frac{t}{2} \\[4pt]
-4e^{-1.5t}\sinh\frac{t}{2} & e^{-1.5t}\cosh\frac{t}{2} - 3e^{-1.5t}\sinh\frac{t}{2}
\end{bmatrix}

Example 16:
Check the controllability and observability for the following system

0 1 1 
x=  x +  u
 −1 −2   −1
y = 1 1 x

Solution:

For controllability:

A = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 \\ -1 \end{bmatrix}

AB = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}

Q_c = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}

\Rightarrow |Q_c| = 0

Thus, the system is not controllable.

For observability:

C^T = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

A^T C^T = \begin{bmatrix} 0 & -1 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \end{bmatrix}

Q_o = \begin{bmatrix} C^T & A^T C^T \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}

\Rightarrow |Q_o| = 0

Thus, the system is not observable.

Example 17:
Check the controllability and observability of the system described by

\dot{x} = \begin{bmatrix} -3 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix} x + \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ 2 & 1 \end{bmatrix} u

y = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} x

Solution:

Controllability:

B = \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ 2 & 1 \end{bmatrix}

AB = \begin{bmatrix} -3 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 2 & -2 \\ 2 & 0 \\ 2 & 1 \end{bmatrix}

A^2B = \begin{bmatrix} -3 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 2 & -2 \\ 2 & 0 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 7 \\ 0 & 3 \\ 2 & 1 \end{bmatrix}

Q_c = \begin{bmatrix} B & AB & A^2B \end{bmatrix}
= \begin{bmatrix} 0 & 1 & 2 & -2 & -2 & 7 \\ 0 & 0 & 2 & 0 & 0 & 3 \\ 2 & 1 & 2 & 1 & 2 & 1 \end{bmatrix}

We need to find a square (3 × 3) submatrix of Q_c whose determinant is non-zero.

Choosing the first three columns gives \begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & 2 \\ 2 & 1 & 2 \end{bmatrix}, whose determinant is 4 (non-zero).

Since this determinant is not zero, the rank of Q_c is 3 and the system is completely controllable.

Observability:

A = \begin{bmatrix} -3 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}, \qquad
A^T = \begin{bmatrix} -3 & -1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}

C = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}, \qquad
C^T = \begin{bmatrix} 0 & 1 \\ 0 & 1 \\ 1 & 0 \end{bmatrix}

A^T C^T = \begin{bmatrix} -3 & -1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -4 \\ 0 & 1 \\ 1 & 2 \end{bmatrix}

(A^T)^2 C^T = A^T (A^T C^T) = \begin{bmatrix} -3 & -1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}\begin{bmatrix} 0 & -4 \\ 0 & 1 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 11 \\ 0 & -4 \\ 1 & -1 \end{bmatrix}

Q_o = \begin{bmatrix} C^T & A^T C^T & (A^T)^2 C^T \end{bmatrix}
= \begin{bmatrix} 0 & 1 & 0 & -4 & 0 & 11 \\ 0 & 1 & 0 & 1 & 0 & -4 \\ 1 & 0 & 1 & 2 & 1 & -1 \end{bmatrix}

Choosing the submatrix \begin{bmatrix} -4 & 0 & 11 \\ 1 & 0 & -4 \\ 2 & 1 & -1 \end{bmatrix}, its determinant is -5.

Since this determinant is non-zero, the rank of Q_o is 3 and hence the system is completely observable.
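
The two rank tests of Example 17 can be confirmed with numpy.linalg.matrix_rank; the sketch below rebuilds Qc and Qo inline with NumPy and checks their ranks. This numerical check is an addition to the original example.

    import numpy as np

    A = np.array([[-3, 1, 1], [-1, 0, 1], [0, 0, 1]], dtype=float)
    B = np.array([[0, 1], [0, 0], [2, 1]], dtype=float)
    C = np.array([[0, 0, 1], [1, 1, 0]], dtype=float)

    n = A.shape[0]
    Qc = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    Qo = np.hstack([np.linalg.matrix_power(A.T, i) @ C.T for i in range(n)])

    print(np.linalg.matrix_rank(Qc))   # 3 -> completely controllable
    print(np.linalg.matrix_rank(Qo))   # 3 -> completely observable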

Note:

If the state matrix A is a diagonal matrix (with distinct diagonal elements), then

(i) the system is controllable if no row of the input matrix B is entirely zero, and
(ii) the system is observable if no column of the output matrix C is entirely zero.
REVIEW QUESTIONS
Q.1 Obtain the solution of a state model and hence define state transition

matrix

Q.2 What is state transition matrix? State the properties of state transition

matrix.

Q.3 What is modal matrix?

Q.4 Derive the expression for transfer function from state model.

Q.5 Explain the Laplace transform method for finding the state transition

matrix

Q.6 Obtain the solution of homogeneous and non-homogeneous state

equation

Q.7 Define controllability and observability. Explain both terms with the

help of Kalman’s test.
