
ECE504: Lecture 5


D. Richard Brown III

Worcester Polytechnic Institute

30-Sep-2008


Lecture 5 Major Topics


We are still in Part II of ECE504: Quantitative and qualitative
analysis of systems
mathematical description → results about behavior of system
Today:
1. Existence and uniqueness of solutions to state equations for
continuous-time systems
2. Linear algebraic tools that we are going to need for analysis of
A^k and exp{At}.
◮ Subspaces

◮ Nullspace and range

◮ Rank

◮ Matrix invertibility

You should be reading Chen Chapter 4 now (and referring back to
Chapter 3 for the necessary linear algebra).

Continuous-Time Linear Systems

ẋ(t) = A(t)x(t) + B(t)u(t) (1)


y(t) = C(t)x(t) + D(t)u(t) (2)

Theorem
For any t0 ∈ R, any x(t0) ∈ R^n, and any u(t) ∈ R^p for all t ≥ t0, there
exists a unique solution x(t) for all t ∈ R to the state-update differential
equation (1). It is given as

x(t) = Φ(t, t0) x(t0) + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ,     t ∈ R

where Φ(t, s) : R^2 → R^{n×n} is the unique function satisfying

(d/dt) Φ(t, s) = A(t) Φ(t, s)   with   Φ(s, s) = I_n.


Theorem Remarks
◮ Note that this theorem claims two things:
1. A solution to the state-update equation always exists.
2. The solution is unique.
◮ Why is this important?
◮ Not every differential equation has a solution, e.g.

  ẋ(t) = 1/t   with x(0) = 5

◮ Not every differential equation has a unique solution, e.g.

  ẋ(t) = 3 (x(t))^{2/3}   with x(0) = 0

◮ We have already established uniqueness for the vector/matrix state
update differential equation ẋ(t) = A(t)x(t) + B(t)u(t) with initial
condition x(t0).
◮ We still need to establish existence.

Theorem: Existence Proof Warmup #1

To develop some intuition, let’s first assume that everything is scalar,
i.e. p = q = n = 1. Our state update equation becomes

ẋ(t) = a(t)x(t) + b(t)u(t)

Let
φ(t, s) := exp{ ∫_s^t a(τ) dτ }

What is φ(s, s)?

What is (d/dt) φ(t, s)?


Theorem: Existence Proof Warmup #1


Note that φ(t, s) = exp{ ∫_s^t a(τ) dτ } always exists and satisfies its own
differential equation:

(d/dt) φ(t, s) = a(t) φ(t, s)   with   φ(s, s) = 1.
Now let’s try the following solution to the scalar state-update differential
equation with initial state condition x(t0):

x(t) = φ(t, t0) x(t0) + ∫_{t0}^{t} φ(t, τ) b(τ) u(τ) dτ,     ∀t ∈ R

To see that this is indeed a solution, we need to confirm two things:


1. Does our solution satisfy the initial condition requirement of the
scalar state-update DE?
2. Does our solution really solve the scalar state-update DE?
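As a quick numerical sanity check (a sketch, not part of the original slides), the scalar formula above can be compared against direct ODE integration for an assumed example a(t) = 2t, b(t) = 1, u(t) = 1, x(0) = 5:

import numpy as np
from scipy.integrate import quad, solve_ivp

# Assumed example data (hypothetical, for illustration only)
a = lambda t: 2.0 * t
b = lambda t: 1.0
u = lambda t: 1.0
t0, x0 = 0.0, 5.0

def phi(t, s):
    # phi(t, s) = exp( int_s^t a(tau) dtau );  here int_s^t 2*tau dtau = t^2 - s^2
    return np.exp(t**2 - s**2)

def x_closed_form(t):
    # x(t) = phi(t, t0) x(t0) + int_{t0}^{t} phi(t, tau) b(tau) u(tau) dtau
    integral, _ = quad(lambda tau: phi(t, tau) * b(tau) * u(tau), t0, t)
    return phi(t, t0) * x0 + integral

# Direct numerical integration of xdot = a(t) x + b(t) u(t)
sol = solve_ivp(lambda t, x: [a(t) * x[0] + b(t) * u(t)], (t0, 1.0), [x0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for t in (0.25, 0.5, 1.0):
    print(t, x_closed_form(t), sol.sol(t)[0])   # the two values should agree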

Theorem: Existence Proof Warmup #2

To develop additional intuition, let’s now assume that everything is
time-invariant, i.e. A(t) ≡ A and B(t) ≡ B. Our state update equation
becomes

ẋ(t) = Ax(t) + Bu(t)

Let

Φ(t, s) := Σ_{k=0}^{∞} (1/k!) A^k (t − s)^k

What is Φ(s, s)?

What is (d/dt) Φ(t, s)?


Theorem: Existence Proof Warmup #2


Note that Φ(t, s) = Σ_{k=0}^{∞} (1/k!) A^k (t − s)^k exists for any A ∈ R^{n×n}, t ∈ R,
and s ∈ R. Moreover, Φ(t, s) satisfies its own differential equation:

(d/dt) Φ(t, s) = A Φ(t, s)   with   Φ(s, s) = I_n.
Now let’s try the following solution to the LTI state-update differential
equation with initial state condition x(t0):

x(t) = Φ(t, t0) x(t0) + ∫_{t0}^{t} Φ(t, τ) B u(τ) dτ,     ∀t ∈ R

To see that this is indeed a solution, we need to confirm two things:


1. Does our solution satisfy the initial condition requirement of the
   LTI state-update DE?
2. Does our solution really solve the LTI state-update DE?
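A small numerical check (a sketch, with an assumed example matrix) that the truncated power series above agrees with scipy's matrix exponential:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # assumed example matrix
t, s = 1.5, 0.5

# Partial sums of  sum_{k>=0} A^k (t-s)^k / k!
Phi_series = np.zeros((2, 2))
term = np.eye(2)                  # k = 0 term
for k in range(30):
    Phi_series += term
    term = term @ (A * (t - s)) / (k + 1)   # next term of the series

Phi_expm = expm((t - s) * A)
print(np.max(np.abs(Phi_series - Phi_expm)))   # should be near machine precision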

Theorem: Existence Proof for General Case

For the general (non-scalar, time-varying) case, we propose the solution

x(t) = Φ(t, t0) x(t0) + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ     (3)

where the state transition matrix satisfies the matrix differential equation
(d/dt) Φ(t, s) = A(t) Φ(t, s)   with   Φ(s, s) = I_n.     (4)
Note that (4) is consistent with our two warmup cases.

To complete the existence proof, we need to:


1. Show that (3) with Φ(t, s) defined according to (4) satisfies the
initial condition requirement of the state-update DE.
2. Show that (3) with Φ(t, s) defined according to (4) is indeed a
solution to the state-update DE.
3. Show that there always exists a solution to the matrix DE (4).

Theorem: Existence Proof for General Case: Part 1

Show that (3) with Φ(t, s) defined according to (4) satisfies the initial
condition requirement of the state-update DE.


Theorem: Existence Proof for General Case: Part 2

Show that (3) with Φ(t, s) defined according to (4) is indeed a solution to
the state-update DE.


Theorem: Existence Proof for General Case: Part 3

Show that there always exists a solution to the matrix DE (4).


Peano-Baker Series Example

 
A(t) = [ 0  0
         t  0 ]
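The iterated integrals can be evaluated symbolically. Here is a short Python/sympy sketch (not from the slides): for this A(t) the product A(τ1)A(τ2) is the zero matrix, so the series terminates after the first-order term and Φ(t, s) = [ 1 0 ; (t² − s²)/2 1 ].

import sympy as sp

t, s, tau1, tau2 = sp.symbols('t s tau1 tau2')
A = lambda tau: sp.Matrix([[0, 0], [tau, 0]])

def mint(M, var, a, b):
    # element-wise definite integration of a sympy Matrix
    return M.applyfunc(lambda e: sp.integrate(e, (var, a, b)))

M0 = sp.eye(2)                                                  # k = 0 term
M1 = mint(A(tau1), tau1, s, t)                                  # k = 1 term
M2 = mint(A(tau1) * mint(A(tau2), tau2, s, tau1), tau1, s, t)   # k = 2 term (zero here)

print(sp.simplify(M0 + M1 + M2))   # Matrix([[1, 0], [t**2/2 - s**2/2, 1]])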


Fundamental Matrix Method

While the Peano-Baker series establishes existence (and thus concludes the
proof of the existence and uniqueness theorem), it is sometimes easier to
find Φ(t, s) via the “fundamental matrix method” (Chen section 4.5).

Basic idea:
1. Consider the continuous-time DE with x(t) ∈ R^n

ẋ(t) = A(t)x(t) (5)

2. Choose n different initial conditions x1(t0), . . . , xn(t0). These n
   initial condition vectors must be linearly independent.
3. These n different initial conditions lead to n different solutions to (5).
Call these solutions x1 (t), . . . , xn (t) and put them into a matrix
X(t) = [x1(t), . . . , xn(t)] ∈ R^{n×n}.
4. Note that Ẋ(t) = A(t)X(t). The quantity X(t) is called a
fundamental matrix of (5). Is the fundamental matrix unique?

Fundamental Matrix Method

Let X(t) be any fundamental matrix of (5). Note that X(t) is invertible
for all t (see Chen p. 107). The state transition matrix Φ(t, s) can then be
computed as

Φ(t, s) = X(t) X^{-1}(s).

Check:

Φ(s, s) =

(d/dt) Φ(t, s) =


Fundamental Matrix Example

 
A(t) = [ 0  0
         t  0 ]
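A numerical sketch of the method (assumed solver tolerances, not part of the slides): integrate ẋ(t) = A(t)x(t) from two linearly independent initial conditions, stack the solutions into X(t), and form Φ(t, s) = X(t)X^{-1}(s). For this A(t) the result should match the closed form Φ(t, s) = [ 1 0 ; (t² − s²)/2 1 ] found in the Peano-Baker example.

import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    return np.array([[0.0, 0.0], [t, 0.0]])

def f(t, x):
    return A(t) @ x

t0, tf = 0.0, 2.0
# Two linearly independent initial conditions (here, the standard basis vectors)
sols = [solve_ivp(f, (t0, tf), x0, dense_output=True, rtol=1e-10, atol=1e-12)
        for x0 in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]

def X(t):
    # Fundamental matrix: its columns are the two solutions evaluated at time t
    return np.column_stack([sol.sol(t) for sol in sols])

t, s = 1.7, 0.4
Phi_numeric = X(t) @ np.linalg.inv(X(s))
Phi_exact = np.array([[1.0, 0.0], [(t**2 - s**2) / 2.0, 1.0]])
print(np.max(np.abs(Phi_numeric - Phi_exact)))   # should be tiny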


Remarks on the CT State-Transition Matrix Φ(t, s)

1. There are many ways to compute Φ(t, s). Some are easier than
others, but computing Φ(t, s) is almost always difficult.
2. Do different methods for computing Φ(t, s) lead to different
solutions?
3. Unlike the DT-STM Φ[k, j], the CT-STM Φ(t, s) is defined for any
   (t, s) ∈ R^2. This means that we can specify an initial state x(t0) and
   compute the system response at times prior to t0.
4. It is easy to show that Φ(t, s) possesses the semi-group property, i.e.

Φ(t, τ ) = Φ(t, s)Φ(s, τ )

for any (t, τ, s) ∈ R^3 from the fundamental matrix formulation:

Φ(t, τ) = Φ(t, s) Φ(s, τ) = X(t) X^{-1}(s) X(s) X^{-1}(τ) = X(t) X^{-1}(τ)


Important Special Case: A(t) ≡ A

When A(t) ≡ A, the Peano-Baker series for the state-transition matrix becomes

Φ(t, s) = Σ_{k=0}^{∞} M_k(t, s)
        = Σ_{k=0}^{∞} ∫_s^t ∫_s^{τ1} · · · ∫_s^{τ_{k−1}} A A · · · A dτ_k · · · dτ_1     (k-fold product of A; the k = 0 term is I_n)
        = Σ_{k=0}^{∞} A^k ∫_s^t ∫_s^{τ1} · · · ∫_s^{τ_{k−1}} dτ_k · · · dτ_1

To compute M_k(t, s), let's look at k = 0, 1, 2, . . . to see the pattern...


Important Special Case: A(t) ≡ A

By induction, we can show that

M_k(t, s) = (1/k!) A^k (t − s)^k

hence

Φ(t, s) = Σ_{k=0}^{∞} (1/k!) A^k (t − s)^k

which is consistent with our earlier result (warmup #2).

Suppose, for x ∈ C, we have

f(x) = Σ_{k=0}^{∞} x^k / k!

What is f(x)?

Matrix Exponential

Definition (Matrix Exponential)


Given W ∈ C^{n×n}, the matrix exponential is defined as

exp(W) = Σ_{k=0}^{∞} W^k / k!

Note that the matrix exponential is not performed element-by-element, i.e.

exp( [ w11  w12 ; w21  w22 ] ) ≠ [ e^{w11}  e^{w12} ; e^{w21}  e^{w22} ]

Matlab has a special function (expm) that computes matrix exponentials.


Calling exp(W) will not give the same results as expm(W).
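A short Python sketch of the same point (scipy.linalg.expm plays the role of Matlab's expm; the matrix below is an assumed example):

import numpy as np
from scipy.linalg import expm

W = np.array([[1.0, 1.0],
              [0.0, 2.0]])    # assumed example matrix

print(np.exp(W))    # element-wise e^{w_ij}: NOT the matrix exponential
print(expm(W))      # the true matrix exponential  sum_k W^k / k!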


Important Special Case: A(t) ≡ A

Putting it all together, when A(t) ≡ A, we can say that



Φ(t, s) = Σ_{k=0}^{∞} (1/k!) A^k (t − s)^k = exp{(t − s)A}

Then the solution to the LTI continuous-time state-update DE is


x(t) = exp{(t − t0)A} x(t0) + ∫_{t0}^{t} exp{(t − τ)A} B(τ) u(τ) dτ

and the output equation is


y(t) = C(t) exp{(t − t0)A} x(t0) + ∫_{t0}^{t} C(t) exp{(t − τ)A} B(τ) u(τ) dτ + D(t) u(t)
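As a sanity check (a sketch with an assumed example system and input, not from the slides), the solution formula can be evaluated numerically and compared against direct integration of the state equation:

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example system
B = np.array([[0.0], [1.0]])
u = lambda t: np.array([np.sin(t)])        # assumed example input
t0, x0 = 0.0, np.array([1.0, -1.0])

def x_formula(t):
    # x(t) = exp{(t-t0)A} x(t0) + int_{t0}^{t} exp{(t-tau)A} B u(tau) dtau
    zero_input = expm((t - t0) * A) @ x0
    zero_state, _ = quad_vec(lambda tau: expm((t - tau) * A) @ (B @ u(tau)), t0, t)
    return zero_input + zero_state

sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (t0, 2.0), x0,
                rtol=1e-10, atol=1e-12, dense_output=True)
print(x_formula(2.0))
print(sol.sol(2.0))   # should agree closely with the line above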


Contrast/Comparison Between CT and DT Solutions

Similarities
◮ CT and DT solutions have same “look”.
◮ CT and DT solutions have state transition matrices with same
intuitive properties, e.g. semigroup.

Differences
◮ In DT systems, x[k] is only defined for k ≥ k0 because the DT-STM
Φ[k, k0 ] is only defined for k ≥ k0 .
◮ In CT systems, x(t) is defined for all t ∈ R because the CT-STM
  Φ(t, t0) is defined for all (t, t0) ∈ R^2.
◮ We didn’t prove this, but the CT-STM Φ(t, t0 ) is always invertible.
This is not true of the DT-STM Φ[k, k0 ].


What We Know
◮ We know how to solve discrete-time LTV and LTI systems.
“Solve” means “write an analytical expression for x[k] and y[k]
given A[k], B[k], C[k], D[k], and x[k0 ]”.
◮ We know that solutions must exist and must be unique.

◮ We know how to solve continuous-time LTV and LTI systems.


“Solve” means “write an analytical expression for x(t) and y(t)
given A(t), B(t), C(t), D(t), and x(t0 )”.
◮ We know that solutions must exist and must be unique.
◮ We also know two ways to compute the state transition matrix.

◮ We know some of the properties of state transition matrices.


◮ We know differences between the DT-STM and the CT-STM.

Where We Are Heading

Our focus is going to shift primarily to LTI systems for a little while.

Recall that, when A is not a function of time, the state transition
matrices become

Φ[k, j] = A^{k−j}     (discrete time)

Φ(t, s) = exp{(t − s)A}     (continuous time)

We would like to be able to better analyze these matrix functions in
order to, for example, efficiently compute A^{k−j}.

We are going to need to learn some more linear algebra first...


Sets and Subspaces


Let A and B be sets.
◮ A ⊂ B means that all elements of the set A are also in the set B.
◮ x ∈ A means that x is an element of the set A.
◮ A ⊂ B and x ∈ A implies that x ∈ B.

Definition
S ⊂ R^n is a subspace if and only if S is closed under addition and scalar
multiplication, i.e.

x ∈ S and y ∈ S   ⇒   x + y ∈ S
and
x ∈ S and α ∈ R   ⇒   αx ∈ S.

Note that subspaces must always include the zero vector.



Spanning Set of a Subspace

Definition
A spanning set for the subspace S ⊂ R^n is a set of vectors s1, . . . , sp, each
in S, such that every element of S can be expressed as a linear
combination of the vectors s1, . . . , sp, i.e.

x ∈ S   ⇒   there exist α1, . . . , αp such that x = α1 s1 + · · · + αp sp

where αi ∈ R for i = 1, . . . , p.
Example: Suppose S is the xy-plane in R^3. Which of the following are
spanning sets?

{ (1,0,0)⊤ }   or   { (1,0,0)⊤, (0,1,0)⊤ }   or   { (1,0,0)⊤, (0,1,0)⊤, (0,0,1)⊤ }   or
{ (1,0,0)⊤, (1,1,0)⊤ }   or   { (1,0,0)⊤, (1,1,0)⊤, (0,1,0)⊤ }
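One way to check the candidates mechanically (a sketch, assuming the sets as written above): a set is a spanning set for the xy-plane exactly when every vector has zero third component (so it lies in S) and the vectors have rank 2.

import numpy as np

candidates = {
    "{(1,0,0)}":                  [[1, 0, 0]],
    "{(1,0,0),(0,1,0)}":          [[1, 0, 0], [0, 1, 0]],
    "{(1,0,0),(0,1,0),(0,0,1)}":  [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    "{(1,0,0),(1,1,0)}":          [[1, 0, 0], [1, 1, 0]],
    "{(1,0,0),(1,1,0),(0,1,0)}":  [[1, 0, 0], [1, 1, 0], [0, 1, 0]],
}
for name, vecs in candidates.items():
    V = np.array(vecs, dtype=float)
    in_plane = np.all(V[:, 2] == 0)                           # every vector lies in the xy-plane
    spans = bool(in_plane and np.linalg.matrix_rank(V) == 2)  # and their span is 2-dimensional
    print(name, spans)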


Some Facts About Subspaces, Spanning Sets, and Bases

1. Every subspace S ⊂ R^n possesses a linearly independent spanning set. Such a
   set is called a basis for S. This basis is not unique, of course.
2. The number of vectors in any basis for S is the same. This number is
called the dimension of S. We use the notation dim(S) to denote
the dimension of a subspace.
3. If S is a subspace of R^n, then dim(S) ≤ n with equality if and only if
   S = R^n.
4. Any spanning set for S contains at least dim(S) vectors.
5. Any set with elements from S containing more than dim(S) vectors is
linearly dependent.
6. A basis is a minimally-sized spanning set of S.
7. A basis is a maximally-sized linearly independent set of vectors in S.


Nullspace and Range

Given W ∈ R^{m×n} (not necessarily square), there are two important
subspaces related to this matrix.
Definition
The nullspace of W is defined as the set of all x ∈ R^n such that
W x = 0. We denote this subspace of R^n as null(W).

Definition
The range of W is defined as the set of all y ∈ R^m such that there exists
an x satisfying W x = y. We denote this subspace of R^m as range(W).

The range is also sometimes called the “column space” because it is the
subspace generated by linear combinations of the columns of W .

Note that both subspaces always include the zero vector.


Nullspace

The matrix W ∈ R^{m×n} maps vectors from R^n to R^m. The nullspace of
W is a subspace of R^n.

[Figure: the map W from R^n to R^m, with null(W) drawn as a subspace of the domain R^n.]


Range

The matrix W ∈ R^{m×n} maps vectors from R^n to R^m. The range of W is
a subspace of R^m.

[Figure: the map W from R^n to R^m, with range(W) drawn as a subspace of the codomain R^m.]


Nullspace and Range Examples

Suppose

W = [ 1 1 ; 1 1 ; 1 1 ]     (6)

W = [ 1 1 1 ; 1 1 1 ]     (7)

W = [ 1 1 ; 0 1 ]     (8)

What is the nullspace and range of W in each case?
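A sketch of checking these with sympy (nullspace() and columnspace() return bases for the two subspaces):

import sympy as sp

examples = {
    "(6)": sp.Matrix([[1, 1], [1, 1], [1, 1]]),
    "(7)": sp.Matrix([[1, 1, 1], [1, 1, 1]]),
    "(8)": sp.Matrix([[1, 1], [0, 1]]),
}
for label, W in examples.items():
    print(label, "null(W) basis:", W.nullspace())
    print(label, "range(W) basis:", W.columnspace())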


Existence and Uniqueness of Solutions to Ax = b

For any A ∈ R^{m×n} and b ∈ R^m, a solution to Ax = b
exists if and only if b ∈ range(A).

For any A ∈ R^{m×n} and b ∈ R^m, a solution to Ax = b is
unique if and only if dim(null(A)) = 0.
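In code, both conditions reduce to rank computations. A minimal sketch (using example (6) from the previous slide):

import numpy as np

def solution_exists(A, b):
    # b is in range(A) exactly when appending b does not increase the rank
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

def solution_unique(A):
    # dim(null(A)) = 0 exactly when rank(A) equals the number of columns
    return np.linalg.matrix_rank(A) == A.shape[1]

A = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])    # example (6)
print(solution_exists(A, np.array([1.0, 1.0, 1.0])))  # True: b in range(A)
print(solution_exists(A, np.array([1.0, 2.0, 3.0])))  # False: b not in range(A)
print(solution_unique(A))                             # False: nullity is 1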


Gaussian Elimination and Echelon Form


◮ GE is an algorithm for reducing a matrix to echelon form.
◮ Once you have a matrix in echelon form, you can easily determine its
range and the dimension of its nullspace.
◮ This allows you to easily answer questions about the existence and
uniqueness of solutions to Ax = b.

1. Form the “augmented matrix” U = [A | b] ∈ R^{m×(n+1)}.
2. Notation: U(k, :) is the kth row of U and U(k, j) is the (k, j)th
   element of U.
3. Force U (2, 1) = 0 by forming an appropriate combination of other
rows and subtracting this combination from U (2, :).
4. Force U (3, 1) = U (3, 2) = 0 using the same technique.
5. Keep doing this until you have an upper triangular matrix.
6. You can now solve the last row since it has only one unknown.
7. Back substitute your answer and solve the second last row.
8. Keep doing this until you solve the top row. (A minimal code sketch of
   this procedure is given below.)
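A minimal Python sketch of these steps for a square, invertible A (no pivoting or degenerate cases handled; the example system is an assumed one):

import numpy as np

def gaussian_solve(A, b):
    n = A.shape[0]
    U = np.column_stack([A, b]).astype(float)      # augmented matrix [A | b]
    # Forward elimination: zero out the entries below each pivot
    for j in range(n):
        for i in range(j + 1, n):
            factor = U[i, j] / U[j, j]
            U[i, :] -= factor * U[j, :]
    # Back substitution: solve from the last row upward
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (U[i, n] - U[i, i + 1:n] @ x[i + 1:n]) / U[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_solve(A, b))       # [0.8 1.4]
print(np.linalg.solve(A, b))      # same answer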

Gaussian Elimination and Echelon Form Examples


Using the Echelon Form to Determine a Basis for range(A)

The pivot columns of A form a basis for the range of A. Note that the
echelon form matrix tells you which columns to pick from A. Example:
   
A = [  1   2  −1   3   0
      −1  −2   2  −2  −1
       1   2   0   4   0
       0   0   2   2  −1 ]     and     b = [ 1 ; 1 ; 6 ; 7 ]     (9)

After reduction to echelon form of U = [A | b], we have


 
[ 1   2  −1   3   0 | 1
  0   0   1   1  −1 | 2
  0   0   0   0   1 | 3
  0   0   0   0   0 | 0 ]     (10)

The pivot columns here are the first, third, and fifth. Continued...

Using the Echelon Form to Determine a Basis for range(A)

Hence a basis for the range of A is

{ (1, −1, 1, 0)⊤,  (−1, 2, 0, 2)⊤,  (0, −1, 0, −1)⊤ }

You should be able to verify that these vectors are linearly independent.

The range of A is the subspace formed by all linear combinations of these
vectors. Solutions to Ax = b exist only when b ∈ range(A).


Using the Echelon Form to Determine dim(null(A))

The dimension of the nullspace of A (also called the nullity) is simply the
number of non-pivot columns in the echelon form.
You can use the echelon form to also find a basis for null(A) (see any
good linear algebra textbook for the details). In our example, a basis for
the nullspace is

{ (−2, 1, 0, 0, 0)⊤,  (−4, 0, −1, 1, 0)⊤ }
You should be able to verify that these vectors are linearly independent
and that Ax = 0 if x is any linear combination of these basis vectors.
Most importantly, solutions to Ax = b are unique only when
dim(null(A)) = 0.
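The whole worked example can be verified with sympy (a sketch; rref() returns the reduced echelon form together with the pivot-column indices):

import sympy as sp

A = sp.Matrix([[ 1,  2, -1,  3,  0],
               [-1, -2,  2, -2, -1],
               [ 1,  2,  0,  4,  0],
               [ 0,  0,  2,  2, -1]])
b = sp.Matrix([1, 1, 6, 7])

rref, pivots = A.rref()
print(pivots)               # (0, 2, 4): the first, third, and fifth columns are pivots
print(A.columnspace())      # basis for range(A): the pivot columns of A
print(A.nullspace())        # basis for null(A): two vectors, so dim(null(A)) = 2
print(A.row_join(b).rank() == A.rank())   # True exactly when Ax = b has a solution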

Rank

Definition
The rank of W is defined as the dimension of the range of W , i.e.

rank(W ) := dim(range(W )).

Some useful facts:


◮ For W ∈ R^{m×n}, 0 ≤ rank(W) ≤ min{m, n}.
◮ rank(W ) is equal to the number of pivot columns in the echelon
form of W .
◮ Since dim(null(W )) is equal to the number of non-pivot columns in
the echelon form, rank + nullity must equal n.
◮ 0 ≤ rank(U W) ≤ min{rank(U), rank(W)}. In other words, matrix
  multiplication can never increase rank (see the quick check below).
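A quick random check of the last fact (a sketch with arbitrary test matrices):

import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 5))
print(np.linalg.matrix_rank(U @ W),
      min(np.linalg.matrix_rank(U), np.linalg.matrix_rank(W)))   # rank(UW) <= min(...)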


Matrix Transpose

Definition
Given

W = [ w11  w12  · · ·  w1n
      w21  w22  · · ·  w2n
       ⋮     ⋮             ⋮
      wm1  wm2  · · ·  wmn ]  ∈ R^{m×n}

the transpose of W is given as

W⊤ = [ w11  w21  · · ·  wm1
       w12  w22  · · ·  wm2
        ⋮     ⋮             ⋮
       w1n  w2n  · · ·  wmn ]  ∈ R^{n×m}.


An Important Property of the Matrix Transpose

For any A ∈ R^{m×p} and B ∈ R^{p×n}, the product C = AB is an m × n
real-valued matrix. The transpose of C is

C⊤ = (AB)⊤ = B⊤ A⊤ ∈ R^{n×m}

Note that the order of the matrix product has been changed by the
transpose. Do the matrix dimensions agree?


Invertibility of Square Matrices

Definition
Given W ∈ R^{n×n}, we say that W is invertible if there exists V ∈ R^{n×n}
such that V W = W V = I_n. The quantity V is called the matrix inverse
for W and we use the notation V = W^{-1}.
The matrix inverse does not always exist, but when it does, it is unique.

Fact: If W is invertible, then W⊤ is also invertible. To see this, just use
what you know about the matrix inverse and the matrix transpose:

W W^{-1} = I_n
(W W^{-1})⊤ = I_n⊤ = I_n
(W^{-1})⊤ W⊤ = I_n

and, by the definition, (W^{-1})⊤ = (W⊤)^{-1}.
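A one-line numerical check of this fact (a sketch with an assumed example matrix):

import numpy as np

W = np.array([[2.0, 1.0], [0.0, 3.0]])
print(np.allclose(np.linalg.inv(W).T, np.linalg.inv(W.T)))   # True: (W^{-1})^T = (W^T)^{-1}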



Invertibility of Square Matrices: Equivalences

The following statements are equivalent:


1. W is invertible.
2. The only x ∈ R^n satisfying W x = 0 is x = 0.
3. For every b ∈ R^n, there exists a unique x ∈ R^n solving W x = b.
4. The echelon form of W has no rows composed of all zeros.
5. det(W) ≠ 0.
6. rank(W ) = n.
Proofs...


Conclusions

1. Existence and uniqueness of solutions to CT and DT
   systems.
2. Proofs were constructive: you can now “solve” these
   systems.
3. Linear algebra tools to lay foundation for analysis of
   A^k and exp{At}:
◮ Subspaces

◮ Nullspace and range

◮ Rank

◮ Matrix invertibility equivalences
