


Revista Notas de Matemática
Vol.3(2), No. 258, 2007, pp.81-99
http://www.matematica.ula.ve
Comisión de Publicaciones
Departamento de Matemáticas
Facultad de Ciencias
Universidad de Los Andes

A Generalization of Cramer's Rule and Applications to Generalized Linear Differential Equations

Hugo Leiva

Abstract
In this paper we find a formula for the solutions of the following linear equation

Ax = b,  x ∈ IR^m, b ∈ IR^n, m ≥ n,

where A = (a_{j,i})_{n×m} is an n × m real matrix. We prove the following statement: for all b ∈ IR^n the system is solvable if, and only if, the set of vectors {l_1, l_2, ..., l_n} formed by the rows of the matrix A is linearly independent in IR^m. Moreover, one solution of this equation is given by the formula

$$
x_i = \frac{\begin{vmatrix}
\|l_1\|^2 + a_{1i}b_1 & \langle l_1, l_2\rangle + a_{2i}b_1 & \cdots & \langle l_1, l_n\rangle + a_{ni}b_1 \\
\langle l_2, l_1\rangle + a_{1i}b_2 & \|l_2\|^2 + a_{2i}b_2 & \cdots & \langle l_2, l_n\rangle + a_{ni}b_2 \\
\vdots & \vdots & \ddots & \vdots \\
\langle l_n, l_1\rangle + a_{1i}b_n & \langle l_n, l_2\rangle + a_{2i}b_n & \cdots & \|l_n\|^2 + a_{ni}b_n
\end{vmatrix}}{\begin{vmatrix}
\|l_1\|^2 & \langle l_1, l_2\rangle & \cdots & \langle l_1, l_n\rangle \\
\langle l_2, l_1\rangle & \|l_2\|^2 & \cdots & \langle l_2, l_n\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle l_n, l_1\rangle & \langle l_n, l_2\rangle & \cdots & \|l_n\|^2
\end{vmatrix}} - 1,
\qquad i = 1, 2, 3, \ldots, m.
$$

Finally, we apply these results to find solutions of the following generalized linear differential equation

B ẋ(t) = A x(t) + f(t),  t ∈ IR, x ∈ IR^m,

where f ∈ L¹_loc(IR; IR^n) and B is an n × m matrix.

key words: linear equation, Cramer's rule, linear differential equation.


AMS(MOS) subject classifications. primary: 15A09; secondary: 15A04.

1 Introduction

In this paper we find a formula for the solutions of the following linear equation

Ax = b,  x ∈ IR^m, b ∈ IR^n, m ≥ n,   (1)

or, written out,

$$
\begin{cases}
a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,m}x_m = b_1 \\
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,m}x_m = b_2 \\
\quad\vdots \\
a_{n,1}x_1 + a_{n,2}x_2 + \cdots + a_{n,m}x_m = b_n
\end{cases} \qquad (2)
$$

Now, if we define the column vectors

l_1 = [a_{1,1}, a_{1,2}, ..., a_{1,m}]^⊤,  l_2 = [a_{2,1}, a_{2,2}, ..., a_{2,m}]^⊤,  ...,  l_n = [a_{n,1}, a_{n,2}, ..., a_{n,m}]^⊤,

then the system (2) can also be written as follows:

⟨l_i, x⟩ = b_i,  i = 1, 2, ..., n,   (3)


where ⟨·, ·⟩ denotes the inner product in IR^m and A is an n × m real matrix. Usually, one applies the Gaussian elimination method to solve such a system; it is a systematic procedure for solving systems like (1), based on the idea of reducing the augmented matrix

$$
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,m} & b_1 \\
a_{2,1} & a_{2,2} & \cdots & a_{2,m} & b_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,m} & b_n
\end{pmatrix}
\qquad (4)
$$

to a form simple enough that the system of equations can be solved by inspection. But, to my knowledge, in general there is no formula for the solutions of (1) in terms of determinants when m ≠ n.

When m = n and det(A) ≠ 0 the system (1) admits exactly one solution, given by x = A^{-1}b, and from here one can deduce the well-known Cramer's Rule:

Theorem 1.1 (Cramer's Rule; G. Cramer, 1704-1752) If A is an n × n matrix with det(A) ≠ 0, then the solution of the system (1) is given by the formula

x_i = det((A)_i) / det(A),  i = 1, 2, 3, ..., n,   (5)

where (A)_i is the matrix obtained by replacing the entries in the i-th column of A by the entries of the column vector [b_1, b_2, ..., b_n]^⊤.
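As a minimal sketch of the classical rule in code, assuming NumPy (the function name `cramer_solve` is ours, not the paper's):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b for a square A with det(A) != 0 via Cramer's rule:
    x_i = det(A_i) / det(A), where A_i has its i-th column replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) is zero; Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer_solve(A, b)
```

Determinant-based solving is numerically and computationally worse than elimination for large n; the sketch only mirrors formula (5).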

A simple and interesting generalization of Cramer's Rule was given by Prof. Sylvan Burgstahler ([2]) of the University of Minnesota, Duluth, where he taught for 20 years. The result is the following theorem:

Theorem 1.2 (Burgstahler 1983) If the system of equations

$$
\begin{cases}
a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,n}x_n = b_1 \\
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,n}x_n = b_2 \\
\quad\vdots \\
a_{n,1}x_1 + a_{n,2}x_2 + \cdots + a_{n,n}x_n = b_n
\end{cases} \qquad (6)
$$

has a (unique) solution x_1, x_2, ..., x_n, then for all λ_i ∈ IR, i = 1, 2, ..., n, one has

$$
\lambda_1 x_1 + \lambda_2 x_2 + \cdots + \lambda_n x_n =
\frac{\begin{vmatrix}
a_{1,1}+\lambda_1 b_1 & a_{1,2}+\lambda_2 b_1 & \cdots & a_{1,n}+\lambda_n b_1 \\
a_{2,1}+\lambda_1 b_2 & a_{2,2}+\lambda_2 b_2 & \cdots & a_{2,n}+\lambda_n b_2 \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1}+\lambda_1 b_n & a_{n,2}+\lambda_2 b_n & \cdots & a_{n,n}+\lambda_n b_n
\end{vmatrix}}{\begin{vmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{vmatrix}} - 1.
\qquad (7)
$$
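Identity (7) can be checked numerically: the numerator matrix has entry (j, i) equal to a_{j,i} + λ_i b_j, i.e. it is A + b λ^⊤. A quick sketch, assuming NumPy, on a random well-conditioned instance:

```python
import numpy as np

# Numerical check of Burgstahler's identity (7) on a random nonsingular system.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally boosted, nonsingular
b = rng.standard_normal(n)
lam = rng.standard_normal(n)                      # the weights lambda_1..lambda_n

x = np.linalg.solve(A, b)                         # the (unique) solution of (6)
N = A + np.outer(b, lam)                          # entry (j,i) = a_{j,i} + lam_i * b_j
lhs = lam @ x                                     # lambda_1 x_1 + ... + lambda_n x_n
rhs = np.linalg.det(N) / np.linalg.det(A) - 1.0   # right-hand side of (7)
```

The identity follows from the matrix determinant lemma, det(A + bλ^⊤) = det(A)(1 + λ^⊤A^{-1}b).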

In this work we prove the following theorems:

Theorem 1.3 For all b ∈ IR^n the system (1) is solvable if, and only if,

det(AA*) ≠ 0.   (8)

Moreover, one solution of this equation is given by the formula

x = A*(AA*)^{-1}b,   (9)

where A* is the transpose of A (or the conjugate transpose of A in the complex case).

This solution coincides with the Cramer formula when n = m. In fact, it is given componentwise by

$$
x_i = \sum_{j=1}^{n} a_{j,i}\,\frac{\det((AA^*)_j)}{\det(AA^*)}, \qquad i = 1, 2, 3, \ldots, m, \qquad (10)
$$

where (AA*)_j is the matrix obtained by replacing the entries in the j-th column of AA* by the entries of the column vector [b_1, b_2, ..., b_n]^⊤.

In addition, this solution has minimum norm, i.e.,

‖x‖ = inf{‖w‖ : Aw = b, w ∈ IR^m},   (11)

and ‖x‖ = ‖w‖ with Aw = b ⟺ x = w.
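A minimal sketch of Theorem 1.3 in code, assuming NumPy (the helper name `min_norm_solve` is ours): for a full-row-rank n × m matrix A with m ≥ n, formula (9) solves Ax = b and agrees with the pseudoinverse (minimum-norm) solution.

```python
import numpy as np

def min_norm_solve(A, b):
    """Formula (9): x = A* (A A*)^{-1} b, defined when det(AA*) != 0."""
    A = np.asarray(A, dtype=float)
    G = A @ A.T                        # the n x n Gram matrix AA*
    if abs(np.linalg.det(G)) < 1e-12:
        raise ValueError("rows of A are linearly dependent")
    return A.T @ np.linalg.solve(G, b)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))        # random rows are independent almost surely
b = rng.standard_normal(3)
x = min_norm_solve(A, b)
```

The minimum-norm property (11) is exactly what the Moore-Penrose pseudoinverse computes, which is why the comparison with `np.linalg.pinv` succeeds.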


Theorem 1.4 The solution of (1)-(3) given by (9) can be written as follows:

$$
x_i = \frac{\begin{vmatrix}
\|l_1\|^2 + a_{1i}b_1 & \langle l_1, l_2\rangle + a_{2i}b_1 & \cdots & \langle l_1, l_n\rangle + a_{ni}b_1 \\
\langle l_2, l_1\rangle + a_{1i}b_2 & \|l_2\|^2 + a_{2i}b_2 & \cdots & \langle l_2, l_n\rangle + a_{ni}b_2 \\
\vdots & \vdots & \ddots & \vdots \\
\langle l_n, l_1\rangle + a_{1i}b_n & \langle l_n, l_2\rangle + a_{2i}b_n & \cdots & \|l_n\|^2 + a_{ni}b_n
\end{vmatrix}}{\begin{vmatrix}
\|l_1\|^2 & \langle l_1, l_2\rangle & \cdots & \langle l_1, l_n\rangle \\
\langle l_2, l_1\rangle & \|l_2\|^2 & \cdots & \langle l_2, l_n\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle l_n, l_1\rangle & \langle l_n, l_2\rangle & \cdots & \|l_n\|^2
\end{vmatrix}} - 1,
\qquad i = 1, 2, 3, \ldots, m. \qquad (12)
$$

Theorem 1.5 The system (1) is solvable for each b ∈ IR^n if, and only if, the set of vectors {l_1, l_2, ..., l_n} formed by the rows of the matrix A is linearly independent in IR^m.

Moreover, a solution of the system (1) is given by the following formula:

$$
x_i = \frac{\upsilon_{1i}}{\|\upsilon_1\|^2}\, b_1
+ \frac{\upsilon_{2i}}{\|\upsilon_2\|^2}\left(b_2 - \frac{\langle l_2, \upsilon_1\rangle}{\|\upsilon_1\|^2}\, c_1\right)
+ \cdots
+ \frac{\upsilon_{ni}}{\|\upsilon_n\|^2}\left(b_n - \sum_{j=1}^{n-1} \frac{\langle l_n, \upsilon_j\rangle}{\|\upsilon_j\|^2}\, c_j\right),
\qquad i = 1, 2, \ldots, m, \qquad (13)
$$

where the set of vectors {υ_1, υ_2, ..., υ_n} is obtained by the Gram-Schmidt process and the numbers c_1, c_2, ..., c_n are given by

$$
c_1 = b_1, \qquad
c_2 = b_2 - \frac{\langle l_2, \upsilon_1\rangle}{\|\upsilon_1\|^2}\, c_1, \qquad
c_3 = b_3 - \frac{\langle l_3, \upsilon_1\rangle}{\|\upsilon_1\|^2}\, c_1 - \frac{\langle l_3, \upsilon_2\rangle}{\|\upsilon_2\|^2}\, c_2, \qquad \ldots, \qquad
c_n = b_n - \sum_{j=1}^{n-1} \frac{\langle l_n, \upsilon_j\rangle}{\|\upsilon_j\|^2}\, c_j, \qquad (15)
$$

and υ_i = [υ_{i1}, υ_{i2}, υ_{i3}, ..., υ_{im}]^⊤, i = 1, 2, ..., n.
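The construction in Theorem 1.5 can be sketched directly in code, assuming NumPy (the function name is ours): orthogonalize the rows l_1, ..., l_n into υ_1, ..., υ_n, run the same recursion on the right-hand side to get the c_j, and assemble x = Σ_j (c_j/‖υ_j‖²) υ_j.

```python
import numpy as np

def gram_schmidt_solve(A, b):
    """Solve Ax = b (full row rank, m >= n) by the Gram-Schmidt construction."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    V = np.zeros((n, m))          # the orthogonal vectors v_1..v_n
    c = np.zeros(n)               # the recursively corrected right-hand side
    for j in range(n):
        v = A[j].copy()
        cj = b[j]
        for k in range(j):
            coef = (A[j] @ V[k]) / (V[k] @ V[k])
            v -= coef * V[k]      # orthogonalize l_j against v_1..v_{j-1}
            cj -= coef * c[k]     # identical recursion for c_j
        V[j], c[j] = v, cj
    x = np.zeros(m)
    for j in range(n):
        x += c[j] / (V[j] @ V[j]) * V[j]
    return x

A = np.array([[2.0, 0.0, 1.0], [0.0, 1.0, 1.0]])   # hypothetical example
b = np.array([1.0, 2.0])
x = gram_schmidt_solve(A, b)
```

Since x lies in the span of the υ_j (equivalently, the rows of A), this is again the minimum-norm solution of Theorem 1.3.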

Finally, we apply these results to find solutions of the following generalized linear differential equation

B ẋ(t) = A x(t) + f(t),  t ∈ IR, x ∈ IR^m,   (16)

where f ∈ L¹_loc(IR; IR^n) and B is an n × m matrix.



2 Proof of the Main Theorems

In this section we prove Theorems 1.3, 1.4, 1.5 and more. To this end, we denote by ⟨x, y⟩ the Euclidean inner product in IR^k and the associated norm by ‖x‖ = √⟨x, x⟩. We also use some ideas from [3] and the following result from [1], p. 55.

Lemma 2.1 Let W and Z be Hilbert spaces, G ∈ L(W, Z) and G* ∈ L(Z, W) the adjoint operator. Then the following statements hold:

(i) Rang(G) = Z ⟺ there exists γ > 0 such that

‖G*z‖_W ≥ γ‖z‖_Z,  z ∈ Z.

(ii) Rang(G) = Z ⟺ Ker(G*) = {0} ⟺ G* is one-to-one.

Proof of Theorem 1.3. The matrix A may also be viewed as a linear operator A : IR^m → IR^n; therefore A ∈ L(IR^m, IR^n), and its adjoint operator A* : IR^n → IR^m is the transpose of A.

Now, system (1) is solvable for all b ∈ IR^n if, and only if, the operator A is surjective. Hence, by Lemma 2.1 there exists γ > 0 such that

‖A*z‖_{IR^m} ≥ γ‖z‖_{IR^n},  z ∈ IR^n.

Therefore,

⟨AA*z, z⟩ ≥ γ²‖z‖²_{IR^n},  z ∈ IR^n.

This implies that AA* is one-to-one. Since AA* is an n × n matrix, det(AA*) ≠ 0.

Suppose now that det(AA*) ≠ 0. Then (AA*)^{-1} exists, and given b ∈ IR^n one checks that x = A*(AA*)^{-1}b is a solution of Az = b.

Now, since z = (AA*)^{-1}b is the only solution of the equation

(AA*)w = b,

from Theorem 1.1 (Cramer's Rule) we obtain that

z_1 = det((AA*)_1)/det(AA*),  z_2 = det((AA*)_2)/det(AA*),  ...,  z_n = det((AA*)_n)/det(AA*),
where (AA*)_i is the matrix obtained by replacing the entries in the i-th column of AA* by the entries of the column vector [b_1, b_2, ..., b_n]^⊤.
Then the solution x = A*(AA*)^{-1}b of (1) can be written as follows:

$$
x = \begin{pmatrix}
a_{1,1} & a_{2,1} & \cdots & a_{n,1} \\
a_{1,2} & a_{2,2} & \cdots & a_{n,2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1,m} & a_{2,m} & \cdots & a_{n,m}
\end{pmatrix}
\begin{pmatrix}
\det((AA^*)_1)/\det(AA^*) \\
\det((AA^*)_2)/\det(AA^*) \\
\vdots \\
\det((AA^*)_n)/\det(AA^*)
\end{pmatrix}
=
\begin{pmatrix}
\sum_{j=1}^{n} a_{j,1}\det((AA^*)_j)/\det(AA^*) \\
\sum_{j=1}^{n} a_{j,2}\det((AA^*)_j)/\det(AA^*) \\
\vdots \\
\sum_{j=1}^{n} a_{j,m}\det((AA^*)_j)/\det(AA^*)
\end{pmatrix}.
$$

Now we show that this solution has minimum norm. Indeed, consider w ∈ IR^m such that Aw = b. Then

‖w‖² = ‖x + (w − x)‖² = ‖x‖² + 2 Re⟨x, w − x⟩ + ‖w − x‖².

On the other hand,

⟨x, w − x⟩ = ⟨A*(AA*)^{-1}b, w − x⟩ = ⟨(AA*)^{-1}b, Aw − Ax⟩ = ⟨(AA*)^{-1}b, b − b⟩ = 0.

Hence ‖w‖² − ‖x‖² = ‖w − x‖² ≥ 0.

Therefore ‖x‖ ≤ ‖w‖, and ‖x‖ = ‖w‖ iff x = w.

Proof of Theorem 1.4. From Theorem 1.3 the solution is given by

x_i = a_{1,i}z_1 + a_{2,i}z_2 + ··· + a_{n,i}z_n,  i = 1, 2, ..., m,

where

z_j = det((AA*)_j)/det(AA*),  j = 1, 2, ..., n,

is the only solution of the system

AA*z = b,

given by Theorem 1.1 (Cramer's Rule).


On the other hand, we have the following expression for AA*:

$$
AA^* = \begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,m} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,m} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,m}
\end{pmatrix}
\begin{pmatrix}
a_{1,1} & a_{2,1} & \cdots & a_{n,1} \\
a_{1,2} & a_{2,2} & \cdots & a_{n,2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1,m} & a_{2,m} & \cdots & a_{n,m}
\end{pmatrix}
= \begin{pmatrix}
\|l_1\|^2 & \langle l_1, l_2\rangle & \cdots & \langle l_1, l_n\rangle \\
\langle l_2, l_1\rangle & \|l_2\|^2 & \cdots & \langle l_2, l_n\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle l_n, l_1\rangle & \langle l_n, l_2\rangle & \cdots & \|l_n\|^2
\end{pmatrix}.
$$

Then, applying Theorem 1.2 (with λ_j = a_{j,i}), we obtain:

$$
x_i = a_{1,i}z_1 + a_{2,i}z_2 + \cdots + a_{n,i}z_n =
\frac{\begin{vmatrix}
\|l_1\|^2 + a_{1i}b_1 & \langle l_1, l_2\rangle + a_{2i}b_1 & \cdots & \langle l_1, l_n\rangle + a_{ni}b_1 \\
\langle l_2, l_1\rangle + a_{1i}b_2 & \|l_2\|^2 + a_{2i}b_2 & \cdots & \langle l_2, l_n\rangle + a_{ni}b_2 \\
\vdots & \vdots & \ddots & \vdots \\
\langle l_n, l_1\rangle + a_{1i}b_n & \langle l_n, l_2\rangle + a_{2i}b_n & \cdots & \|l_n\|^2 + a_{ni}b_n
\end{vmatrix}}{\begin{vmatrix}
\|l_1\|^2 & \langle l_1, l_2\rangle & \cdots & \langle l_1, l_n\rangle \\
\langle l_2, l_1\rangle & \|l_2\|^2 & \cdots & \langle l_2, l_n\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle l_n, l_1\rangle & \langle l_n, l_2\rangle & \cdots & \|l_n\|^2
\end{vmatrix}} - 1.
$$

This completes the proof of the theorem.

Proof of Theorem 1.5. Suppose the system is solvable for all b ∈ IR^n, and assume there exist real numbers c_i, i = 1, 2, ..., n, such that

c_1l_1 + c_2l_2 + c_3l_3 + ··· + c_nl_n = 0.

By solvability (with b = [c_1, ..., c_n]^⊤), there exists x ∈ IR^m such that

$$
\begin{cases}
a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,m}x_m = c_1 \\
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,m}x_m = c_2 \\
\quad\vdots \\
a_{n,1}x_1 + a_{n,2}x_2 + \cdots + a_{n,m}x_m = c_n
\end{cases}
$$

In other words,

⟨l_i, x⟩ = c_i,  i = 1, 2, ..., n.

Hence,

⟨c_il_i, x⟩ = c_i²,  i = 1, 2, ..., n.

So,

⟨c_1l_1 + c_2l_2 + c_3l_3 + ··· + c_nl_n, x⟩ = c_1² + c_2² + c_3² + ··· + c_n² = 0.

Therefore c_1 = c_2 = ··· = c_n = 0, which proves the independence of {l_1, l_2, ..., l_n}.

Now suppose that the set {l_1, l_2, ..., l_n} is linearly independent in IR^m. Using the Gram-Schmidt process we can find a set {υ_1, υ_2, ..., υ_n} of orthogonal vectors in IR^m given by the formula:

$$
\upsilon_1 = l_1, \qquad
\upsilon_2 = l_2 - \frac{\langle l_2, \upsilon_1\rangle}{\|\upsilon_1\|^2}\upsilon_1, \qquad
\upsilon_3 = l_3 - \frac{\langle l_3, \upsilon_1\rangle}{\|\upsilon_1\|^2}\upsilon_1 - \frac{\langle l_3, \upsilon_2\rangle}{\|\upsilon_2\|^2}\upsilon_2, \qquad \ldots, \qquad
\upsilon_n = l_n - \sum_{i=1}^{n-1} \frac{\langle l_n, \upsilon_i\rangle}{\|\upsilon_i\|^2}\upsilon_i. \qquad (17)
$$

Then system (1) is equivalent to the following system:

$$
\langle \upsilon_i, x\rangle = c_i, \qquad i = 1, 2, \ldots, n, \qquad (18)
$$

where

$$
c_1 = b_1, \qquad
c_2 = b_2 - \frac{\langle l_2, \upsilon_1\rangle}{\|\upsilon_1\|^2}\, c_1, \qquad
c_3 = b_3 - \frac{\langle l_3, \upsilon_1\rangle}{\|\upsilon_1\|^2}\, c_1 - \frac{\langle l_3, \upsilon_2\rangle}{\|\upsilon_2\|^2}\, c_2, \qquad \ldots, \qquad
c_n = b_n - \sum_{i=1}^{n-1} \frac{\langle l_n, \upsilon_i\rangle}{\|\upsilon_i\|^2}\, c_i. \qquad (19)
$$

If we denote the vectors υ_i by

υ_i = [υ_{i1}, υ_{i2}, υ_{i3}, ..., υ_{im}]^⊤,  i = 1, 2, ..., n,

and define the n × m matrix

$$
\Upsilon = \begin{pmatrix}
\upsilon_{1,1} & \upsilon_{1,2} & \cdots & \upsilon_{1,m} \\
\upsilon_{2,1} & \upsilon_{2,2} & \cdots & \upsilon_{2,m} \\
\vdots & \vdots & \ddots & \vdots \\
\upsilon_{n,1} & \upsilon_{n,2} & \cdots & \upsilon_{n,m}
\end{pmatrix},
$$
then, applying Theorem 1.3, we obtain that system (18) has a solution for all c ∈ IR^n if, and only if, det(ΥΥ*) ≠ 0. But

$$
\Upsilon\Upsilon^* = \begin{pmatrix}
\|\upsilon_1\|^2 & \langle \upsilon_1, \upsilon_2\rangle & \cdots & \langle \upsilon_1, \upsilon_n\rangle \\
\langle \upsilon_2, \upsilon_1\rangle & \|\upsilon_2\|^2 & \cdots & \langle \upsilon_2, \upsilon_n\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle \upsilon_n, \upsilon_1\rangle & \langle \upsilon_n, \upsilon_2\rangle & \cdots & \|\upsilon_n\|^2
\end{pmatrix}
= \begin{pmatrix}
\|\upsilon_1\|^2 & 0 & \cdots & 0 \\
0 & \|\upsilon_2\|^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \|\upsilon_n\|^2
\end{pmatrix},
$$

so

det(ΥΥ*) = ‖υ_1‖²‖υ_2‖² ··· ‖υ_n‖² ≠ 0.

From here, using the formula (9), we complete the proof of this theorem.

2.1 Examples and Particular Cases

In this section we shall consider some particular cases and examples to illustrate the results of
this work.

Example 2.1 Consider the following particular case of system (1):

a_{1,1}x_1 + a_{1,2}x_2 + ··· + a_{1,m}x_m = b.   (20)

In this case n = 1 and A = [a_{1,1}, a_{1,2}, ..., a_{1,m}]. Then, with the column vector l_1 = [a_{1,1}, a_{1,2}, ..., a_{1,m}]^⊤, we get

AA* = [a_{1,1}, a_{1,2}, ..., a_{1,m}] l_1 = ‖l_1‖².

Then (AA*)^{-1}b = b‖l_1‖^{-2} and

x = A*(AA*)^{-1}b = [a_{1,1}b‖l_1‖^{-2}, a_{1,2}b‖l_1‖^{-2}, ..., a_{1,m}b‖l_1‖^{-2}]^⊤.

Therefore, a solution of the system (20) is given by:

$$
x_i = \frac{a_{1i}\, b}{\|l_1\|^2} = \frac{a_{1i}\, b}{\sum_{j=1}^{m} a_{1j}^2}, \qquad i = 1, 2, \ldots, m. \qquad (21)
$$
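For a single equation, formula (21) just rescales the coefficient vector. A quick sketch with hypothetical numbers, assuming NumPy:

```python
import numpy as np

# Formula (21) for one equation a_1 x_1 + ... + a_m x_m = b:
# x_i = a_i * b / sum_j a_j**2, the multiple of the coefficient vector
# lying on the hyperplane defined by the equation.
a = np.array([1.0, 2.0, 2.0])     # hypothetical coefficients, ||a||^2 = 9
b = 9.0
x = a * b / np.sum(a ** 2)
```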

Example 2.2 Consider the following particular case of system (1):

$$
\begin{cases}
a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,m}x_m = b_1 \\
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,m}x_m = b_2
\end{cases} \qquad (22)
$$

In this case n = 2 and

$$
A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,m} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,m} \end{pmatrix}.
$$

Then, if we define the column vectors l_1 = [a_{1,1}, a_{1,2}, ..., a_{1,m}]^⊤ and l_2 = [a_{2,1}, a_{2,2}, ..., a_{2,m}]^⊤, we get

$$
AA^* = \begin{pmatrix} \|l_1\|^2 & \langle l_1, l_2\rangle \\ \langle l_2, l_1\rangle & \|l_2\|^2 \end{pmatrix}.
$$

Hence, from the formula (9) we obtain:

$$
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}
= A^*(AA^*)^{-1}b
= \begin{pmatrix} a_{1,1} & a_{2,1} \\ a_{1,2} & a_{2,2} \\ \vdots & \vdots \\ a_{1,m} & a_{2,m} \end{pmatrix}
\begin{pmatrix} \det((AA^*)_1)/\det(AA^*) \\ \det((AA^*)_2)/\det(AA^*) \end{pmatrix}.
$$

Therefore, a solution of the system (22) is given by:

$$
x_i = a_{1i}\,\frac{b_1\|l_2\|^2 - b_2\langle l_1, l_2\rangle}{\|l_1\|^2\|l_2\|^2 - |\langle l_1, l_2\rangle|^2}
+ a_{2i}\,\frac{b_2\|l_1\|^2 - b_1\langle l_2, l_1\rangle}{\|l_1\|^2\|l_2\|^2 - |\langle l_1, l_2\rangle|^2},
\qquad i = 1, 2, \ldots, m. \qquad (23)
$$
Now we apply the foregoing formula to find the solution of the following system:

$$
\begin{cases}
x_1 + x_2 = 1 \\
-x_1 + x_2 + x_3 = -1
\end{cases} \qquad (27)
$$

If we define the column vectors l_1 = [1, 1, 0]^⊤ and l_2 = [−1, 1, 1]^⊤, then det(AA*) = ‖l_1‖²‖l_2‖² − |⟨l_1, l_2⟩|² = ‖l_1‖²‖l_2‖² = 6, and x_1 = 5/6, x_2 = 1/6, x_3 = −2/6.
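This worked system can be checked numerically, assuming NumPy:

```python
import numpy as np

# The system x1 + x2 = 1, -x1 + x2 + x3 = -1, solved by x = A*(AA*)^{-1} b.
A = np.array([[1.0, 1.0, 0.0],
              [-1.0, 1.0, 1.0]])
b = np.array([1.0, -1.0])
G = A @ A.T                      # here diag(2, 3), since the rows are orthogonal
x = A.T @ np.linalg.solve(G, b)
```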

Example 2.3 Consider the following general case of system (1):

$$
\begin{cases}
a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,m}x_m = b_1 \\
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,m}x_m = b_2 \\
\quad\vdots \\
a_{n,1}x_1 + a_{n,2}x_2 + \cdots + a_{n,m}x_m = b_n
\end{cases} \qquad (28)
$$

Then, if {l_1, l_2, ..., l_n} is an orthogonal set in IR^m, we get

$$
AA^* = \begin{pmatrix}
\|l_1\|^2 & 0 & \cdots & 0 \\
0 & \|l_2\|^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \|l_n\|^2
\end{pmatrix},
$$

and the solution of the system (1) takes the very simple form

$$
x_i = \sum_{j=1}^{n} a_{j,i}\, b_j\, \|l_j\|^{-2}, \qquad i = 1, 2, \ldots, m. \qquad (29)
$$

Now we apply the formula (29) to find a solution of the following system:

$$
\begin{cases}
-x_1 - x_2 + x_3 + x_4 = 1 \\
-x_1 + x_2 - x_3 + x_4 = 1 \\
x_1 - x_2 - x_3 + x_4 = 1
\end{cases} \qquad (30)
$$

If we define the column vectors l_1 = [−1, −1, 1, 1]^⊤, l_2 = [−1, 1, −1, 1]^⊤ and l_3 = [1, −1, −1, 1]^⊤, then {l_1, l_2, l_3} is an orthogonal set in IR^4, and the solution of this system is given by x_1 = −1/4, x_2 = −1/4, x_3 = −1/4 and x_4 = 3/4.
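The orthogonal-rows formula (29) can be sketched and checked on this system, assuming NumPy:

```python
import numpy as np

# Formula (29): when the rows l_1..l_n of A are mutually orthogonal,
# x_i = sum_j a_{j,i} b_j / ||l_j||^2.
A = np.array([[-1.0, -1.0, 1.0, 1.0],
              [-1.0, 1.0, -1.0, 1.0],
              [1.0, -1.0, -1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])
norms2 = np.sum(A ** 2, axis=1)      # ||l_j||^2 = 4 for each row
x = A.T @ (b / norms2)
```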

3 Variational Method to Obtain Solutions

Theorems 1.3, 1.4 and 1.5 give a formula for one solution of the system (1), namely the one of minimum norm. But this is not the only way to build solutions of this equation. Next, we present a variational method that obtains solutions of (1) as minimizers of a quadratic functional J : IR^n → IR,

J(ξ) = ½‖A*ξ‖² − ⟨b, ξ⟩,  ∀ξ ∈ IR^n.   (31)

Proposition 3.1 For a given b ∈ IR^n the equation (1) has a solution x ∈ IR^m if, and only if,

⟨x, A*ξ⟩ − ⟨b, ξ⟩ = 0,  ∀ξ ∈ IR^n.   (32)

It is easy to see that (32) is in fact an optimality condition for the critical points of the quadratic functional J defined above.

Lemma 3.1 Suppose the quadratic functional J has a minimizer ξ_b ∈ IR^n. Then

x_b = A*ξ_b   (33)

is a solution of (1).

Proof. First, observe that J has the form

J(ξ) = ½⟨AA*ξ, ξ⟩ − ⟨b, ξ⟩,  ∀ξ ∈ IR^n.

Then, if ξ_b is a point where J attains its minimum value, we obtain

∇J(ξ_b) = AA*ξ_b − b = 0.

So AA*ξ_b = b, and x_b = A*ξ_b is a solution of (1).

Remark 3.1 Under the conditions of Theorem 1.3, the solutions given by the formulas (33) and (9) coincide.

Theorem 3.1 The system (1) is solvable if, and only if, the quadratic functional J defined by (31) has a minimum for all b ∈ IR^n.

Proof. Suppose (1) is solvable for all b ∈ IR^n. Then the matrix A, viewed as an operator from IR^m to IR^n, is surjective. Hence, from Lemma 2.1 there exists γ > 0 such that

‖A*ξ‖² ≥ γ²‖ξ‖²,  ξ ∈ IR^n.

Then,

J(ξ) ≥ (γ²/2)‖ξ‖² − ‖b‖‖ξ‖,  ξ ∈ IR^n.

Therefore,

lim_{‖ξ‖→∞} J(ξ) = ∞.

Consequently, J is coercive and the existence of a minimum is ensured.

The converse follows as in Proposition 3.1.

Now we consider an example where Theorems 1.3, 1.4 and 1.5 cannot be applied, but the variational method does work.

Example 3.1 Consider the system with linearly dependent rows

$$
\begin{cases}
x_1 + x_2 + x_3 = 1 \\
2x_1 + 2x_2 + 2x_3 = 2
\end{cases}
$$

In this case n = 2 and

$$
A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
$$

Then

$$
AA^* = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \end{pmatrix}
\begin{pmatrix} 1 & 2 \\ 1 & 2 \\ 1 & 2 \end{pmatrix}
= \begin{pmatrix} 3 & 6 \\ 6 & 12 \end{pmatrix}.
$$

Therefore, the critical points of the quadratic functional J given by (31) satisfy the equation

AA*ξ = b,

i.e.,

3ξ_1 + 6ξ_2 = 1
6ξ_1 + 12ξ_2 = 2

So there are infinitely many critical points, given by

ξ = [1/3 − 2a, a]^⊤,  a ∈ IR.

Hence a solution of the system is given by

$$
x = A^*\xi = \begin{pmatrix} 1 & 2 \\ 1 & 2 \\ 1 & 2 \end{pmatrix}
\begin{pmatrix} \tfrac{1}{3} - 2a \\ a \end{pmatrix}
= \begin{pmatrix} \tfrac{1}{3} \\ \tfrac{1}{3} \\ \tfrac{1}{3} \end{pmatrix}.
$$
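The rank-deficient computation above can be reproduced numerically, assuming NumPy; since AA* is singular, we take one critical point ξ of J as a least-squares solution of AA*ξ = b:

```python
import numpy as np

# Example 3.1: AA* is singular, so Theorem 1.3 fails, but any critical point
# xi with AA* xi = b yields the same x = A* xi.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0]])
b = np.array([1.0, 2.0])
xi, *_ = np.linalg.lstsq(A @ A.T, b, rcond=None)   # one critical point of J
x = A.T @ xi
```

Note that although ξ is not unique, x = A*ξ is the same for every critical point, as the hand computation shows.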
4 The Case m < n

The case m < n is overdetermined, since the equation Ax = b has a solution only when b ∈ Rang(A). Nevertheless, we can prove the following theorems:

Theorem 4.1 Suppose m < n and A : IR^m → IR^n is one-to-one. Then, for all b ∈ Rang(A) the equation

Ax = b,  x ∈ IR^m, b ∈ IR^n, m < n,   (34)

admits only one solution, given by

x = (A*A)^{-1}A*b.   (35)

Moreover, this solution is given by the following formula:

x_i = det((A*A)_i) / det(A*A),  i = 1, 2, 3, ..., m,   (36)

where (A*A)_i is the matrix obtained by replacing the entries in the i-th column of A*A by the entries of the column vector

$$
A^*b = \begin{pmatrix}
\sum_{j=1}^{n} a_{j,1}b_j \\
\sum_{j=1}^{n} a_{j,2}b_j \\
\vdots \\
\sum_{j=1}^{n} a_{j,m}b_j
\end{pmatrix}.
$$

Proof. Since A is one-to-one, from Lemma 2.1 we have that A* : IR^n → IR^m is surjective, and consequently det(A*A) ≠ 0. Therefore (A*A)^{-1} exists and

x = (A*A)^{-1}A*b   (37)

is the only solution of (34). In fact, if x ∈ IR^m is a solution of (34), then A*Ax = A*b and (A*A)^{-1}A*Ax = (A*A)^{-1}A*b, so x = (A*A)^{-1}A*b. The remainder of the proof follows in the same way as in Theorem 1.3.
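A minimal sketch of Theorem 4.1 in code, assuming NumPy; the instance below is hypothetical, with b constructed inside Rang(A) so that the formula recovers the exact solution:

```python
import numpy as np

# Theorem 4.1: for m < n with A injective and b in Rang(A),
# the unique solution of Ax = b is x = (A*A)^{-1} A* b.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, -1.0])   # hypothetical; guarantees b lies in Rang(A)
b = A @ x_true
x = np.linalg.solve(A.T @ A, A.T @ b)
```

For b outside Rang(A), the same expression gives the least-squares approximation instead of an exact solution.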

Theorem 4.2 If the set of vectors {l_1, l_2, ..., l_m} formed by the columns of the matrix A is an orthogonal set in IR^n, then

$$
A^*A = \begin{pmatrix}
\|l_1\|^2 & 0 & \cdots & 0 \\
0 & \|l_2\|^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \|l_m\|^2
\end{pmatrix},
$$

and the solution x = (A*A)^{-1}A*b of the system (34) takes the following simple form:

$$
x = \begin{pmatrix}
\frac{1}{\|l_1\|^2}\sum_{j=1}^{n} a_{j,1}b_j \\
\frac{1}{\|l_2\|^2}\sum_{j=1}^{n} a_{j,2}b_j \\
\vdots \\
\frac{1}{\|l_m\|^2}\sum_{j=1}^{n} a_{j,m}b_j
\end{pmatrix}.
$$

5 Generalized Linear Equations

Let A and B be n × m matrices with m ≥ n such that det(AA*) ≠ 0 and the eigenvalues of BA*(AA*)^{-1} are not zero. Under these conditions we consider the following generalized linear system of differential equations with implicit derivative:

B ẋ(t) = A x(t) + f(t),  t ∈ IR, f ∈ L¹_loc(IR, IR^n).   (38)

Before studying the non-homogeneous equation (38), we look for solutions of the homogeneous equation

B ẋ(t) = A x(t),  t ∈ IR.   (39)

We look for solutions of (39) of the form

x(t) = e^{λt}ξ, with λ ≠ 0, ξ = A*(AA*)^{-1}b, b ∈ IR^n, ξ ∈ IR^m.

Then, if we put S = A*(AA*)^{-1}, we get the following algebraic equations for λ and b:

(λBS − I)b = (λBA*(AA*)^{-1} − I)b = 0,  b ≠ 0,   (40)

det(λBS − I) = 0.   (41)

Lemma 5.1 If α is an eigenvalue of the matrix BS = BA*(AA*)^{-1} with corresponding eigenvector b ∈ IR^n, then x(t) = e^{λt}ξ, with λ = α^{-1} and ξ = Sb, is a solution of (39).

Proof. If x(t) = e^{λt}ξ with ξ = Sb, then, using AS = I and BSb = αb = λ^{-1}b,

A x(t) = e^{λt}ASb = e^{λt}b = e^{λt}λBSb = λe^{λt}Bξ = B(λe^{λt}ξ) = B ẋ(t).

Corollary 5.1 Suppose BS possesses n linearly independent eigenvectors b_1, b_2, ..., b_n, and let 1/λ_j be the real eigenvalue corresponding to b_j (the numbers 1/λ_1, 1/λ_2, ..., 1/λ_n need not all be distinct). Then, for all c_j ∈ IR, j = 1, 2, ..., n,

$$
x(t) = \sum_{j=1}^{n} c_j e^{\lambda_j t}\xi_j, \qquad \xi_j = S b_j, \qquad (42)
$$

is a general solution of the equation (39).

Example 5.1 Consider the following system of differential equations:

$$
\begin{cases}
\dot{x}_1 + \dot{x}_2 + \dot{x}_3 = x_1 - x_2 + x_3 \\
\dot{x}_1 - \dot{x}_2 = -x_1 + x_2 + 2x_3
\end{cases} \qquad (43)
$$

Here,

$$
B = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 0 \end{pmatrix} \quad \text{and} \quad
A = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & 2 \end{pmatrix}.
$$

Then,

$$
AA^* = \begin{pmatrix} 3 & 0 \\ 0 & 6 \end{pmatrix}, \qquad
S = A^*(AA^*)^{-1} = \begin{pmatrix} \tfrac{1}{3} & -\tfrac{1}{6} \\ -\tfrac{1}{3} & \tfrac{1}{6} \\ \tfrac{1}{3} & \tfrac{1}{3} \end{pmatrix}
\quad \text{and} \quad
BS = \begin{pmatrix} \tfrac{1}{3} & \tfrac{1}{3} \\ \tfrac{2}{3} & -\tfrac{1}{3} \end{pmatrix}.
$$

Hence, the eigenvalues and corresponding eigenvectors of BS are, respectively:

$$
\alpha_1 = \frac{1}{\sqrt{3}}, \quad b_1 = \begin{pmatrix} \tfrac{\sqrt{3}+1}{2} \\ 1 \end{pmatrix}; \qquad
\alpha_2 = \frac{-1}{\sqrt{3}}, \quad b_2 = \begin{pmatrix} -\tfrac{\sqrt{3}-1}{2} \\ 1 \end{pmatrix}.
$$

On the other hand,

$$
\xi_1 = Sb_1 = \begin{pmatrix} \tfrac{\sqrt{3}}{6} \\ -\tfrac{\sqrt{3}}{6} \\ \tfrac{\sqrt{3}+3}{6} \end{pmatrix}
\quad \text{and} \quad
\xi_2 = Sb_2 = \begin{pmatrix} -\tfrac{\sqrt{3}}{6} \\ \tfrac{\sqrt{3}}{6} \\ \tfrac{3-\sqrt{3}}{6} \end{pmatrix}.
$$

Then, with λ_1 = 1/α_1 = √3 and λ_2 = 1/α_2 = −√3,

x_1(t) = e^{√3 t}ξ_1 and x_2(t) = e^{−√3 t}ξ_2

are two solutions of (43).
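This example can be verified numerically, assuming NumPy: for x(t) = e^{λt}ξ, the equation Bẋ = Ax reduces to the algebraic condition λBξ = Aξ, which we check for an eigenpair of BS.

```python
import numpy as np

# Example 5.1: build S = A*(AA*)^{-1}, take an eigenpair (alpha, b) of BS,
# set lambda = 1/alpha and xi = S b, and verify lambda * B xi == A xi,
# which is exactly B x'(t) = A x(t) for x(t) = e^{lambda t} xi.
B = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, 2.0]])
S = A.T @ np.linalg.inv(A @ A.T)
BS = B @ S
alphas, vecs = np.linalg.eig(BS)
alpha, bvec = alphas[0], vecs[:, 0]
lam = 1.0 / alpha
xi = S @ bvec
```

Either eigenpair works: since AS = I, Aξ = b, while Bξ = BSb = αb, so λBξ = b automatically.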

Theorem 5.1 (Variation of Constants Formula) Suppose BS possesses n linearly independent eigenvectors b_1, b_2, ..., b_n, let 1/λ_j be the real eigenvalue corresponding to b_j (the numbers 1/λ_1, 1/λ_2, ..., 1/λ_n need not all be distinct), and write f(t) = Σ_{j=1}^{n} β_j(t)b_j. Then, for all c_j ∈ IR, j = 1, 2, ..., n,

$$
x(t) = \sum_{j=1}^{n} c_j e^{\lambda_j t}\xi_j + \int_0^t \sum_{j=1}^{n} e^{\lambda_j (t-s)}\beta_j(s)\,\xi_j\, ds, \qquad \xi_j = S b_j, \qquad (44)
$$

is a general solution of the equation (38).

Corollary 5.2 Under the conditions of the foregoing theorem, if b_1, b_2, ..., b_n are orthonormal, then a general solution of (38) is given by

$$
x(t) = \sum_{j=1}^{n} c_j e^{\lambda_j t}\xi_j + \int_0^t \sum_{j=1}^{n} e^{\lambda_j (t-s)}\langle b_j, f(s)\rangle\,\xi_j\, ds, \qquad \xi_j = S b_j. \qquad (45)
$$

Proof of Theorem 5.1. Clearly, a solution x(t) of (38) has the form x(t) = x_h(t) + x_p(t), where x_h(t) is a solution of the homogeneous equation (39) and x_p(t) is a particular solution of the non-homogeneous equation (38). So we look for a particular solution of (38) by the variation of constants method; that is, we seek functions c_j(t), j = 1, 2, ..., n, such that

$$
x(t) = \sum_{j=1}^{n} c_j(t)e^{\lambda_j t}\xi_j
$$

is a solution of (38). To this end, consider the following expression:

$$
\begin{aligned}
B\dot{x} &= \sum_{j=1}^{n} \dot{c}_j(t)e^{\lambda_j t}B\xi_j + \sum_{j=1}^{n} c_j(t)\lambda_j e^{\lambda_j t}B\xi_j \\
&= \sum_{j=1}^{n} \dot{c}_j(t)e^{\lambda_j t}B\xi_j + \sum_{j=1}^{n} c_j(t)B\frac{d}{dt}\left(e^{\lambda_j t}\xi_j\right) \\
&= \sum_{j=1}^{n} \dot{c}_j(t)e^{\lambda_j t}B\xi_j + \sum_{j=1}^{n} c_j(t)Ae^{\lambda_j t}\xi_j \\
&= f(t) + \sum_{j=1}^{n} c_j(t)Ae^{\lambda_j t}\xi_j.
\end{aligned}
$$

Then we have to solve, for the unknowns c_j(t), the equation

$$
\sum_{j=1}^{n} \dot{c}_j(t)e^{\lambda_j t}B\xi_j = f(t),
$$

which is equivalent to the equation

$$
\sum_{j=1}^{n} \dot{c}_j(t)e^{\lambda_j t}b_j = f(t) = \sum_{j=1}^{n} \beta_j(t)b_j.
$$

Since b_1, b_2, ..., b_n are linearly independent,

$$
c_j(t) = \int_0^t e^{-\lambda_j s}\beta_j(s)\,ds.
$$

This completes the proof of the theorem.

References

[1] R.F. Curtain and A.J. Pritchard, "Infinite Dimensional Linear Systems Theory", Lecture Notes in Control and Information Sciences, Vol. 8, Springer-Verlag, Berlin (1978).

[2] S. Burgstahler, "A Generalization of Cramer's Rule", The Two-Year College Mathematics Journal, Vol. 14, No. 3 (Jun. 1983), pp. 203-205.

[3] E. Iturriaga and H. Leiva, "A Necessary and Sufficient Condition for the Controllability of Linear Systems in Hilbert Spaces and Applications", IMA Journal of Mathematical Control and Information, pp. 1-12, doi:10.1093/imamci/dnm017 (2007).

HUGO LEIVA
Departamento de Matemáticas, Facultad de Ciencias,
Universidad de Los Andes
Mérida 5101, Venezuela
e-mail: [email protected]
