Ch 4 Notes Packet 2022

4.1 VECTOR SPACES & SUBSPACES


Many concepts concerning vectors in Rⁿ can be extended to other mathematical systems.

We can think of a vector space, in general, as a collection of objects that behave as vectors do in Rⁿ. The objects of such a set are called vectors.

A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication
by scalars (real numbers), subject to the ten axioms below. The axioms must hold for all u, v and w in V and for all scalars c and d.
1. u + v is in V.
2. u + v = v + u
3. (u + v) + w = u + (v + w)
4. There is a vector (called the zero vector) 0 in V such that u + 0 = u.
5. For each u in V, there is a vector -u in V satisfying u + (-u) = 0.
6. cu is in V.
7. c(u + v) = cu + cv.
8. (c + d)u = cu + du.
9. (cd)u = c(du).
10. 1u = u.

Common Vector Spaces


R1—the set of all real numbers
R2—the set of ordered pairs of real numbers (the xy-plane)
R3—three-dimensional Euclidean space

Rⁿ—the set of all column (or row) vectors with n components, where the components are real numbers

Vector spaces where the scalars are complex numbers are called complex vector spaces. If the scalars are real, they are called real
vector spaces.
The simplest of all vector spaces is the zero vector space.

Vector Space Examples

Ex 1: Let M2x2 = the set of all 2 x 2 matrices. (In this context, what is the 0 vector?)

Ex 2: Let n ≥ 0 be an integer and let Pn be the set of all polynomials of degree at most n.

Members of Pn have the form p(t) = a₀ + a₁t + a₂t² + ⋯ + aₙtⁿ where a₀, a₁, …, aₙ are real numbers and t is a real variable. The set Pn is
a vector space. To prove it is a vector space, we must prove that all 10 axioms hold. We will just verify 3 out of the 10 axioms here.

Let p(t) = a₀ + a₁t + a₂t² + ⋯ + aₙtⁿ and q(t) = b₀ + b₁t + b₂t² + ⋯ + bₙtⁿ. Let c be a scalar.

Axiom 1: The polynomial p + q is defined as follows: (p + q)(t) = p(t) + q(t). Therefore, (p + q)(t) = p(t) + q(t) = (______) + (______)t

+ ⋯ + (______)tⁿ, which is also a _____________________ of degree at most ________. So p + q is in Pn.

Axiom 4: 0 = 0 + 0t + ⋯ + 0tⁿ (zero vector in Pn)

(p + 0)(t) = p(t) + 0 = (a₀ + 0) + (a₁ + 0)t + ⋯ + (aₙ + 0)tⁿ = a₀ + a₁t + ⋯ + aₙtⁿ = p(t), and so p + 0 = p

Axiom 6: (cp)(t) = cp(t) = (______) + (______)t + ⋯ + (______)tⁿ, which is in Pn

The other 7 axioms also hold, so Pn is a vector space.
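To see the closure axioms concretely, here is a minimal Python sketch (not part of the packet) that represents a member of Pn by its coefficient list (a₀, a₁, …, aₙ); the function names add and scale are hypothetical labels for the two vector space operations:

```python
# Representing members of P_n by coefficient lists [a0, a1, ..., an].
def add(p, q):
    # (p + q)(t) = (a0 + b0) + (a1 + b1)t + ... + (an + bn)t^n  (Axiom 1)
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    # (cp)(t) = c*a0 + c*a1*t + ... + c*an*t^n  (Axiom 6)
    return [c * a for a in p]

p = [1, 2, 0, 5]    # 1 + 2t + 5t^3, a member of P_3
q = [0, -1, 3, 4]   # -t + 3t^2 + 4t^3
print(add(p, q))    # [1, 1, 3, 9]: still a polynomial of degree at most 3
print(scale(2, p))  # [2, 4, 0, 10]: still in P_3
```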


There are important vector spaces “inside” the standard spaces Rⁿ. For example, consider R³ and choose any plane that passes through
the origin—that plane is a vector space in its own right:
● The zero vector is part of the set.
● If we multiply a vector in the plane by a scalar, we get a vector that lies in the same plane.
● If we add any 2 vectors in the plane, we get a vector in the plane.
This plane illustrates one of the most fundamental ideas in linear algebra—it is a subspace of the original space R³.

Subspaces: Vector spaces may be formed from subsets of other vector spaces. These are called subspaces.
A subspace of a vector space V is a subset H of V that has three properties:

a. The zero vector of V is in H.


b. For each u and v in H, u + v is in H. (In this case we say H is closed under vector addition.)
c. For each u in H and each scalar c, cu is in H. (In this case we say H is closed under scalar multiplication.)

If the subset H satisfies these three properties, then H itself is a vector space.
The smallest subspace of any vector space is the zero subspace, {0}.
The empty set is not a subspace.

Ex 3: Let . Show that H is a subspace of R³.

Ex 4: Is a subspace of R2?

A Shortcut for Determining Subspaces

Theorem 1: If v₁, …, vp are in a vector space V, then Span{ v₁, …, vp } is a subspace of V.

Proof: In order to verify this, check properties a, b and c of the definition of a subspace.

a. 0 is in Span{ v₁, …, vp } since 0 = ____v₁ + _____v₂ + ⋯ + _____ vp

b. To show that Span{ v₁, …, vp } is closed under vector addition, we choose two arbitrary vectors in Span{ v₁, …, vp }:
u = a₁v₁ + a₂v₂ + ⋯ + apvp and v = b₁v₁ + b₂v₂ + ⋯ + bpvp. Then

u + v = (__________________________) + (__________________________) =

(_____v₁ + _____v₁) + (_____v₂ + _____v₂) + ⋯ + (_____vp + _____vp)

So u + v is in Span{ v₁, …, vp }.

c. To show that Span{ v₁, …, vp } is closed under scalar multiplication, choose an arbitrary scalar c and an arbitrary vector
v = b₁v₁ + b₂v₂ + ⋯ + bpvp in Span{ v₁, …, vp }.

Then cv = c(b₁v₁ + b₂v₂ + ⋯ + bpvp) = _____v₁ + _____v₂ + ⋯ + _____vp


So cv is in Span{ v₁, …, vp }.
Since properties a, b and c hold, Span{ v₁, …, vp } is a subspace of V.

Recap

1. To show that H is a subspace of a vector space, use Theorem 1.


2. To show that a set is not a subspace of a vector space, provide a specific example showing that at least one of the properties a, b or c
(from the definition of a subspace) is violated.

Ex 5: Is V={(a + 2b, 2a - 3b): a and b are real} a subspace of R²? Why or why not?

Ex 6: Is a subspace of R³? Why or why not?

Ex 7: Is the set H of all matrices of the form ___ a subspace of M2x2? Explain.



4.2 Null Spaces, Column Spaces & Linear Transformations
The null space of an m x n matrix A, written as Nul A, is the set of all solutions to the homogeneous equation Ax = 0.

Nul A = {x: x is in Rn and Ax = 0}

Theorem 2: The null space of an m x n matrix A is a subspace of Rⁿ. Equivalently, the set of all solutions to a system Ax = 0
of m homogeneous linear equations in n unknowns is a subspace of Rⁿ.

Proof: Nul A is a subset of Rⁿ since A has n columns. We must verify properties a, b, and c of the definition of a subspace.

Solving Ax = 0 yields an explicit description of Nul A.
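For instance, sympy's Matrix.nullspace() carries out exactly this computation: it solves Ax = 0 and returns one basis vector per free variable. A small sketch (the matrix A below is made up, since Ex 1's matrix is not reproduced here):

```python
from sympy import Matrix

# Hypothetical 2 x 4 matrix, already in echelon form.
A = Matrix([[1, -2, 0,  4],
            [0,  0, 1, -3]])
for v in A.nullspace():     # a basis for Nul A, one vector per free variable
    print(v.T, (A * v).T)   # each v satisfies Av = 0
```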

Ex 1: Find an explicit description of Nul A where A =

Observations:
1. The spanning set of Nul A found using the method in the last example is automatically linearly independent.

2. If Nul A ≠ {0}, then the number of vectors in the spanning set for Nul A equals the number of free
variables in Ax = 0.

The column space of an m x n matrix A (Col A) is the set of all linear combinations of the columns of A.

If A = [a₁ ⋯ aₙ], then Col A = Span{a₁, …, aₙ}

Theorem 3: The column space of an m x n matrix A is a subspace of Rm.

Why? (Theorem 1, page 194)

Recall that if Ax = b, then b is a linear combination of the columns of A. Therefore, Col A = {b: b = Ax for some x in Rⁿ}

Ex 2: Find a matrix A such that W = Col A where

The column space of an m x n matrix A is all of Rm if and only if the equation Ax = b has a solution for each b in Rm.
Why?
Ex 3: Is w in Nul A?

Ex 4: Is w in Col A?
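A sketch of both membership tests with sympy (the packet's A and w are not reproduced, so the values below are hypothetical): w is in Nul A exactly when Aw = 0, and b is in Col A exactly when Ax = b is consistent, which can be checked by comparing the ranks of A and [A b].

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [3, 6],
            [0, 1]])               # hypothetical 3 x 2 matrix
w = Matrix([2, 1])                 # candidate for Nul A (must lie in R^2)
b = Matrix([1, 3, 5])              # candidate for Col A (must lie in R^3)

print(A * w == Matrix([0, 0, 0]))              # False here: w is not in Nul A
print(A.rank() == Matrix.hstack(A, b).rank())  # True here: Ax = b is consistent
```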

The Contrast Between Nul A and Col A

Ex 5: Let
(a)​ The column space of A is a subspace of Rk where k = _______________.

(b)​ The null space of A is a subspace of Rk where k = ________________.

(c)​ Find a nonzero vector in Col A. (There are infinitely many possibilities.)

(d)​ Find a nonzero vector in Nul A. Solve Ax = 0 and pick one solution.

​ ​ ​ ​ x1 = -2x2, x2 is free, x3 = 0

Let x₂ = __________ and then x =

Contrast Between Nul A and Col A for an m x n matrix A (see page 204)

Nul A:
1. Subspace of Rⁿ
2. Described implicitly
3. Takes time to find vectors in Nul A
4. No obvious relationship between Nul A and the entries in A
5. Av = 0 for each v in Nul A
6. Easy to tell if v is in Nul A: compute Av
7. Nul A = {0} iff Ax = 0 has only the trivial solution
8. Nul A = {0} iff the linear transformation is one-to-one

Col A:
1. Subspace of Rᵐ
2. Described explicitly
3. Easy to find vectors in Col A
4. Obvious relationship between Col A and the entries in A
5. Ax = v is consistent for each v in Col A
6. May take time to tell if v is in Col A: row reduce [A v]
7. Col A = Rᵐ iff Ax = b has a solution for every b in Rᵐ
8. Col A = Rᵐ iff the linear transformation maps Rⁿ onto Rᵐ

Review:
A subspace of a vector space V is a subset H of V that has three properties:
a.​ The zero vector of V is in H.
b.​ For each u and v in H, u + v is in H.
​ ​ (closed under addition)
c.​ For each u in H and each scalar c, cu is in H.
​ ​ (closed under scalar multiplication)
If the subset H satisfies these three properties, then H itself is a vector space.

Theorems 1, 2, and 3 (Sections 4.1 & 4.2)

If v₁, …, vp are in a vector space V, then Span{v₁, …, vp} is a subspace of V.
The null space of an m x n matrix A is a subspace of Rⁿ.
The column space of an m x n matrix A is a subspace of Rᵐ.

Ex 6: Determine whether each of the following sets is a vector space or provide a counterexample.

(a)

(b)

(c)

Kernel and Range of a Linear Transformation

A linear transformation T from a vector space V into a vector space W is a rule that assigns to each x in V a unique vector
T(x) in W, such that
​ i. T(u + v) = T(u) + T(v) for all u,v in V
​ ii. T(cu) = cT(u) for all u in V and all scalars c.

The kernel (or null space) of T is the set of all vectors u in V such that T(u) = 0.

The range of T is the set of all vectors in W of the form T(u) where u is in V.

So if T(x) = Ax for some matrix A, then the kernel of T is Nul A and the range of T is Col A.


Ex 7: What form do the vectors in ker L look like?

Ex 8: mapping to .

Is (0, 0, 2) in ker L?

Is (2, -3, 4) in ker L?

Ex 9: Let A = and w =

Is w in Nul A?

Is w in Col A?

4.3 Linearly Independent Sets

A set of vectors { v₁, …, vp } in a vector space V is said to be linearly independent if the vector equation
c₁v₁ + c₂v₂ + ⋯ + cpvp = 0 has only the trivial solution c₁ = 0, …, cp = 0.

The set { v₁, …, vp } is said to be linearly dependent if there exist weights c₁, …, cp, not all 0, such that
c₁v₁ + c₂v₂ + ⋯ + cpvp = 0.

The following results from Section 1.7 are still true for more general vector spaces.

A set containing the zero vector is linearly dependent.

A set of two vectors is linearly dependent if and only if one is a multiple of the other.

Ex 1: Determine if is a linearly dependent or independent set.


Ex 2: Determine if is a linearly dependent or independent set.

Theorem 4: An indexed set { v₁, …, vp } of two or more vectors, with v₁ ≠ 0, is linearly dependent if and only if some vector vj (j > 1)
is a linear combination of the preceding vectors v₁, …, vj−1.

Ex 3: Let {p₁, p₂, p₃} be a set of vectors in P₂ where p₁(t) = t, p₂(t) = t², and p₃(t) = 4t + 2t². Is this a linearly dependent set?
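One way to settle Ex 3, anticipating the coordinate vectors of Section 4.4: relative to the standard basis {1, t, t²} of P₂, the three polynomials correspond to vectors in R³, and the set is dependent exactly when the matrix of those vectors has rank less than 3. A sympy sketch:

```python
from sympy import Matrix

p1 = Matrix([0, 1, 0])    # coordinates of t
p2 = Matrix([0, 0, 1])    # coordinates of t^2
p3 = Matrix([0, 4, 2])    # coordinates of 4t + 2t^2
M = Matrix.hstack(p1, p2, p3)
print(M.rank())           # 2 < 3, so the set is linearly dependent
print(4*p1 + 2*p2 == p3)  # True: p3 = 4 p1 + 2 p2
```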

A Basis Set Let H be the plane illustrated below. Which of the following are valid descriptions of H?

(a) H=Span{v₁,v₂}
(b) H=Span{v₁,v₃}
​ (c) H=Span{v₂,v₃}
(d) H=Span{v₁,v₂,v₃}

A basis set is an "efficient" spanning set containing no unnecessary vectors. In this case, we would consider the linearly independent
sets {v₁,v₂} and {v₁,v₃} to both be examples of basis sets or bases (plural for basis) for H.

Definition: Let H be a subspace of a vector space V. An indexed set of vectors β={b₁,…,bp} in V is a basis for H if

(i) β is a linearly independent set, and

(ii) H= Span{ b₁,…,bp }.

Ex 4: Let e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1). Show that {e₁, e₂, e₃} is a basis for R³. The set {e₁, e₂, e₃} is called the standard basis for R³.
(Hint: Use the IMT)

Ex 5: Let S = {1, t, t², …,tⁿ}. Show that S is a basis for Pn.


Ex 6: Let v₁= , v2= , v3= . Is {v₁,v₂,v₃} a basis for R³?

Ex 7: Explain why each of the following sets is not a basis for R³.

(a) ​ ​ ​ ​ (b)

Bases for Nul A

Ex 8: Find a basis for Nul A where A = ___. Hint: Row reduce [A 0].

In this example, the set of three vectors {v1, v2, v3} is a spanning set for Nul A. In the last section we observed that this set is linearly
independent. Therefore {v1, v2, v3} is a basis for Nul A. The technique used here always provides a linearly independent set.

The Spanning Set Theorem: Informally, this theorem says a basis can be constructed from a spanning set of vectors by discarding
vectors which are linear combinations of preceding vectors in the indexed set.

Ex 9: Find a basis for the set of vectors

{v1, v2, v3} if v1 = , v2 = , v3 =

Theorem 5 (The Spanning Set Theorem): Let S={ v₁, …, vp } be a set in V and let H= Span{ v₁, …, vp }
a. If one of the vectors in S, say vk, is a linear combination of the remaining vectors in S, then the set formed from S by removing vk
still spans H.
b. If H ≠ {0}, some subset of S is a basis for H.

Bases for Col A

Ex 10: Find a basis for Col A, where A=[a₁ a₂ a₃ a₄] =


Elementary row operations on a matrix do not affect the linear dependence relations among the columns of the matrix.

Theorem 6: The pivot columns of a matrix A form a basis for Col A.

Ex 11: Let v₁= , v2= , v3= . Find a basis for Span{v₁,v₂,v₃} .

Review:

1. To find a basis for Nul A, use elementary row operations to transform [A 0] to an equivalent reduced row echelon form [B 0]. Use
the reduced row echelon form to find the parametric form of the general solution to Ax = 0. The vectors found in this parametric form of
the general solution form a basis for Nul A.
2. A basis for Col A is formed from the pivot columns of A. Warning: Use the pivot columns of A, not the pivot columns of B, where
B is in reduced echelon form and is row equivalent to A.
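Both recipes can be checked with sympy, whose rref() returns the reduced echelon form together with the pivot-column indices. A sketch on a made-up matrix (note the Col A basis is taken from the columns of A itself, per the warning above):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])               # hypothetical matrix
B, pivots = A.rref()                     # reduced echelon form and pivot columns
col_basis = [A.col(j) for j in pivots]   # basis for Col A: pivot columns of A, not B
nul_basis = A.nullspace()                # basis for Nul A from the parametric solution
print(pivots)                            # (0, 2)
print([v.T for v in nul_basis])
```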

4.4 Coordinate Systems

In general, people are more comfortable working with the vector space Rⁿ and its subspaces than with other types of vector spaces
and subspaces. The goal here is to impose coordinate systems on vector spaces, even if they are not Rⁿ.

Theorem 7 (The Unique Representation Theorem): Let β = {b₁, …, bn} be a basis for a vector space V. Then for each x in V, there
exists a unique set of scalars c₁, …, cn such that x = c₁b₁ + ⋯ + cnbn

Definition: Suppose β = {b₁, …, bn} is a basis for a vector space V and x is in V. The coordinates of x relative to the basis β (or the
β-coordinates of x) are the weights c₁, …, cn such that x = c₁b₁ + ⋯ + cnbn

In this case, the vector [x]β in Rⁿ whose entries are the weights c₁, …, cn is called the coordinate vector of x (relative to β), or the β-coordinate vector of x.

Ex 1: Let β={b₁,b₂} where b₁= and b₂= and let E={e₁,e₂} where e₁=

and e₂= . Express [x]β = ___ in terms of E = {e₁, e₂}; i.e., find [x]E.
Geometrically, we are changing from using basis vectors <3,1> and <0,1> to standard basis vectors <1, 0> and <0, 1>.

[Figures: standard graph paper and β-graph paper]

From the last example: for a basis β = {b₁, …, bn}, let Pβ = [b₁ ⋯ bn], the matrix whose columns are the basis vectors, and let [x]β be the coordinate vector of x. Then

x = Pβ[x]β

We call Pβ the change-of-coordinates matrix from β to the standard basis in Rⁿ. Since its columns form a basis for Rⁿ, Pβ is invertible, and

[x]β = Pβ⁻¹x

and therefore Pβ⁻¹ is the change-of-coordinates matrix from the standard basis in Rⁿ to the basis β.
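A quick numeric sketch of x = Pβ[x]β, using the basis vectors <3,1> and <0,1> mentioned in Ex 1 (the β-coordinate vector below is a made-up value):

```python
from sympy import Matrix

P = Matrix([[3, 0],
            [1, 1]])         # P_beta = [b1 b2], columns are the basis vectors
x_beta = Matrix([2, 1])      # hypothetical beta-coordinates of x
x = P * x_beta               # standard coordinates: (6, 3)
print(x.T)
print((P.inv() * x).T)       # P_beta^(-1) x recovers the beta-coordinates (2, 1)
```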

Ex 2: Let β = {b₁, b₂} where b₁= , b₂= and x = . Find the change-of-coordinates matrix Pβ from β to the standard basis
in R² and the change-of-coordinates matrix Pβ⁻¹ from the standard basis in R² to β.

Ex 3: Find the vector x determined by the given coordinate vector [x]β and the given basis β.

β= , [x]β =

Ex 4: Find the coordinate vector [x]β of x relative to the given basis β = {b₁,. . .,bn} b1 = , b2 = ,x =

Coordinate mappings allow us to introduce coordinate systems for unfamiliar vector spaces.
Standard basis for P₂: {p₁, p₂, p₃} = {1, t, t²}

Polynomials in P₂ behave like vectors in R³.

Since a + bt + ct² = ap₁ + bp₂ + cp₃, [a + bt + ct²]β = (a, b, c)


We say that the vector space R³ is isomorphic to P₂.

Parallel Worlds of R³ and P₂

Vector Space R³: vector form (a, b, c)
Vector Space P₂: vector form a + bt + ct²

Vector Addition Example in R³: (-1, 2, -3) + (2, 3, 5) = (1, 5, 2)
Vector Addition Example in P₂: (-1 + 2t - 3t²) + (2 + 3t + 5t²) = 1 + 5t + 2t²

Informally, we say that vector space V is isomorphic to W if every vector space calculation in V is accurately reproduced in W, and
vice versa.

Assume β is a basis set for a vector space V. Exercise 25 (page 223) shows that a set {u₁, u₂, …, up} in V is linearly independent if and
only if {[u₁]β, [u₂]β, …, [up]β} is linearly independent in Rⁿ.

Ex 5: Use coordinate vectors to determine if {p₁, p₂, p₃} is a linearly independent set, where p₁= 1 – t , p₂= 2 – t + t², and p₃= 2t +
3t².
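Checking Ex 5 with sympy: the coordinate vectors of p₁, p₂, p₃ relative to {1, t, t²} form the columns of a 3 x 3 matrix, and the set is independent exactly when that matrix is invertible:

```python
from sympy import Matrix

M = Matrix([[ 1,  2, 0],    # constant terms of p1, p2, p3
            [-1, -1, 2],    # coefficients of t
            [ 0,  1, 3]])   # coefficients of t^2
print(M.det())              # 1, nonzero, so {p1, p2, p3} is linearly independent
```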

Ex 6: Determine whether the set of polynomials {5 − 3t + 4t² + 2t³, 9 + t + 8t² − 6t³, 6 − 2t + 5t², t³} forms a basis for P₃.

Coordinate vectors also allow us to associate vector spaces with subspaces of other vector spaces.
Ex 7: Let β = {b₁, b₂} where b₁= , b₂= and let H = Span{b₁, b₂}. Find [x]β, if x =

4.5 The Dimension of a Vector Space

Theorem 9: If a vector space V has a basis β = {b1, . . ., bn}, then any set in V containing more than n vectors must be linearly
dependent.

Proof: Suppose {u₁, …, up} is a set of vectors in V where p > n. Then the coordinate vectors [u₁]β, …, [up]β are in Rⁿ. Since p > n, the coordinate vectors [u₁]β, …, [up]β are linearly dependent, and therefore {u₁, …, up} is linearly dependent.

Theorem 10: If a vector space V has a basis of n vectors, then every basis of V must consist of n vectors.

Proof: Suppose β₁ is a basis for V consisting of exactly n vectors, and suppose β₂ is any other basis for V. By definition of a
basis, we know that β₁ and β₂ are both linearly independent sets.

By Theorem 9, if β₂ has more vectors than β₁, then _______ is a linearly dependent set (which cannot be the case).

Again by Theorem 9, if β₁ has more vectors than β₂, then _________ is a linearly dependent set (which cannot be the case).

Therefore β₂ has exactly n vectors also.

Definition – If V is spanned by a finite set, then V is said to be finite-dimensional, and the dimension of V, written as dim V, is the
number of vectors in a basis for V. The dimension of the zero vector space {0} is defined to be 0. If V is not spanned by a finite set,
then V is said to be infinite-dimensional.

Ex 1: The standard basis for P₃ is {_____________}. So dim P₃ = ___________. In general, dim Pn = n + 1.

The standard basis for Rⁿ is {e₁, …, en}, where e₁, …, en are the columns of the n x n identity matrix In. So, for example, dim R³ = 3.

Ex 2: Find a basis and the dimension of the subspace


Dimensions of subspaces of R³

0-dimensional subspace: contains only the zero vector.

1-dimensional subspaces: Span{v} where v ≠ 0 is in R³. These subspaces are __________ through the origin.

2-dimensional subspaces: Span{u, v} where u and v are in R³ and are not multiples of each other. These subspaces are

_______________ through the origin.

3-dimensional subspaces: Span{u, v, w} where u, v, w are linearly independent vectors in R³. This subspace is R³ itself because
the columns of A = [u v w] span R³ according to the IMT.

Theorem 11: Let H be a subspace of a finite-dimensional vector space V. Any linearly independent set in H can be expanded, if
necessary, to a basis for H. Also, H is finite-dimensional and dim H ≤ dim V.

Let H = ___. Then H is a subspace of ___ and dim H < dim ___. We could expand the spanning set to ___

to form a basis for ___.

Theorem 12 (The Basis Theorem): Let V be a p-dimensional vector space, p ≥ 1. Any linearly independent set of exactly p vectors in
V is automatically a basis for V. Any set of exactly p vectors that spans V is automatically a basis for V.

Ex 3: Show that ___ is a basis for P₂.

Dimensions of Col A and Nul A: Recall our techniques to find basis sets for column spaces and null spaces.

Ex 4: Suppose A = ___. Find dim Col A and dim Nul A.


dim Col A = number of pivot columns of A

dim Nul A = number of free variables of A
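These two counts can be read off with sympy on any example (the matrix here is made up); together they always total n, the number of columns, which is the Rank Theorem of Section 4.6:

```python
from sympy import Matrix

A = Matrix([[1, 0, 2, 3],
            [0, 1, 1, 1],
            [1, 1, 3, 4]])          # hypothetical 3 x 4 matrix
dim_col = A.rank()                  # number of pivot columns
dim_nul = len(A.nullspace())        # number of free variables
print(dim_col, dim_nul)             # 2 2
print(dim_col + dim_nul == A.cols)  # True: they total n = 4
```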

4.6 Rank

The set of all linear combinations of the row vectors of a matrix A is called the row space of A and is denoted by Row A.

For example, let A be the 3 x 4 matrix with rows r₁ = (-1, 2, 3, 6), r₂ = (2, -5, -6, -12), r₃ = (1, -3, -3, -6). Then
Row A = Span{ r₁, r₂, r₃ } (a subspace of R⁴)

While it is natural to express row vectors horizontally, they can also be written as column vectors if it is more convenient.

Therefore, Col Aᵀ = Row A

When we use row operations to reduce matrix A to matrix B, we are taking linear combinations of the rows of A to come up with B.
We could reverse this process and use row operations on B to get back to A. Because of this, the row space of A equals the row space
of B.

Theorem 13: If two matrices A and B are row equivalent, then their row spaces are the same. If B is in echelon form, the nonzero rows
of B form a basis for the row space of A as well as B.
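Theorem 13 can be checked on the example above with sympy: row reduce A and keep the nonzero rows of the echelon form:

```python
from sympy import Matrix

A = Matrix([[-1,  2,  3,   6],
            [ 2, -5, -6, -12],
            [ 1, -3, -3,  -6]])
B, _ = A.rref()
# The nonzero rows of B form a basis for Row A (and for Row B).
row_basis = [B.row(i) for i in range(B.rows) if any(B.row(i))]
print(row_basis)    # two rows, so dim Row A = 2
```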

Ex 1: The matrices are row equivalent. Find a basis for row space, column space
and null space of A. Also state the dimension of each.

Note the following:

dim Col A = # of pivots of A = # of nonzero rows in B = dim Row A


dim Nul A = # of free variables = # of nonpivot columns of A

Definition: The rank of A is the dimension of the column space of A.

rank A = dim Col A = # of pivot columns of A = dim Row A

rank A + dim Nul A = n

(# of pivot columns of A) + (# of nonpivot columns of A) = (# of columns of A)

Theorem 14 The Rank Theorem: The dimensions of the column space and the row space of an m x n matrix A are equal. This
common dimension, the rank of A, also equals the number of pivot positions in A and satisfies the equation rank A + dim Nul A = n

Since Row A = Col Aᵀ, rank Aᵀ = dim Col Aᵀ = dim Row A = rank A.

Ex 2: Suppose that a 5 x 8 matrix A has rank 5. Find dim Nul A, dim Row A and rank Aᵀ. Is Col A = R⁵?

Ex 3: For a 9 x 12 matrix A, find the smallest possible value of dim Nul A.

Ex 4: A scientist solves a homogeneous system of 50 equations in 54 variables and finds that exactly 4 of the unknowns are free
variables. Can the scientist be certain that any associated nonhomogeneous system (with the same coefficients) has a solution?

THE INVERTIBLE MATRIX THEOREM (continued)


Let A be a square n x n matrix. Then the following statements are equivalent:
m. The columns of A form a basis for Rⁿ
n. Col A = Rⁿ
o. dim Col A = n
p. rank A = n
q. Nul A = {0}
r. dim Nul A = 0

Ex 5: If a 6 x 3 matrix A has rank 3, find dim Nul A, dim Row A and rank Aᵀ.

Ex 6: Suppose a 5 x 6 matrix A has 4 pivot columns.

a) What is dim Nul A? b) Is Col A = R⁴?

Ex 7: If the null space of a 7 x 6 matrix A is 5-dimensional, what is the dimension of the column space of A?
Ex 8: If the null space of a 5 x 6 matrix A is 4-dimensional, what is the dimension of the row space of A?

Ex 9: If A is a 4 x 3 matrix, what is the largest possible dimension of the row space of A?

Ex 10: If A is a 6 x 4 matrix, what is the smallest possible dimension of Nul A?

4.7 Change of Basis


When a basis B is chosen for an n-dimensional vector space V, the associated coordinate mapping onto Rⁿ provides a coordinate
system for V. Each x in V is identified uniquely by its B-coordinate vector [x]B.
Sometimes, a problem is described initially using a basis B, but the problem's solution is aided by changing B to a new basis C. Each
vector is assigned a new C-coordinate vector.
In this section, we study how [x]C and [x]B are related for each x in V.

Theorem 15: Let B = {b₁, …, bn} and C = {c₁, …, cn} be bases of a vector space V. Then there is a unique n x n matrix P(C←B) such

that [x]C = P(C←B)[x]B.

The columns of P(C←B) are the C-coordinate vectors of the vectors in the basis B. That is, P(C←B) = [ [b₁]C [b₂]C ... [bn]C ]

The matrix P(C←B) is called the change-of-coordinates matrix from B to C.

Multiplication by P(C←B) converts B-coordinates into C-coordinates.

Ex 1: Consider two bases B = {b1, b2} and C = {c1, c2} for a vector space V
such that b1 = -c1 + 4c2 and b2 = 5c1 - 3c2

(a)​ Find the change-of-coordinate matrix from B to C.

(b)​ Find [x]C for x = 5b1 + 3b2

The columns of P(C←B) are linearly independent because they are the coordinate vectors of the linearly independent set B.
Since P(C←B) is square, it must be invertible, by the IMT.

(P(C←B))⁻¹ = P(B←C) is the matrix that converts C-coordinates into B-coordinates.


Change of Basis in Rⁿ: If B = {b₁, …, bn} and E is the standard basis {e₁, …, en} in Rⁿ, then [b₁]E = b₁, and likewise for the other
vectors in B.

In this case, P(E←B) is the same as the change-of-coordinates matrix PB introduced in Section 4.4.

To change coordinates between two nonstandard bases in Rn, we need Theorem 15. The theorem shows that to solve the
change-of-basis problem, we need the coordinate vectors of the old basis relative to the new basis.
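A sketch of that computation with sympy, on made-up bases (Ex 2's vectors are not reproduced here): row reducing [c₁ c₂ b₁ b₂] leaves the identity on the left and P(C←B) on the right.

```python
from sympy import Matrix

b1, b2 = Matrix([1, 3]), Matrix([2, 1])   # hypothetical basis B
c1, c2 = Matrix([1, 1]), Matrix([0, 1])   # hypothetical basis C
M = Matrix.hstack(c1, c2, b1, b2)
R, _ = M.rref()                 # left half becomes I_2
P = R[:, 2:]                    # change-of-coordinates matrix P(C<-B)
print(P)                        # its columns are [b1]_C and [b2]_C
print(P.inv())                  # P(B<-C), converting C-coordinates to B-coordinates
```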

In Ex 2 and 3, let B = {b₁, b₂} and C = {c₁, c₂} be bases for R². Find the change-of-coordinates matrix from B to C and the

change-of-coordinates matrix from C to B. (Hint: For P(C←B), row reduce [c₁ c₂ b₁ b₂] to [I P(C←B)].)

Ex 2: b1= , b2= , c1= , c2= ,

Ex 3: b1= , b2= , c1= , c2= ,

Note: P(B←C) = (P(C←B))⁻¹, or equivalently P(C←B)P(B←C) = I.

Ex 4: In P2, find the change-of-coordinates matrix from the basis to the standard basis. Then write
as a linear combination of the polynomials in B.

This diagram provides another way of viewing the change of coordinates:

[Diagram: B-coordinates → standard coordinates → C-coordinates]

4.9 Applications to Markov Chains


Suppose a physical or mathematical system undergoes a process of change such that at any moment it can be in one of a finite number
of states. For example, the weather could be sunny, cloudy, or rainy. Suppose that such a system changes with time from one state to
another and at scheduled times the state of the system is observed.

If the probability of a given state can be predicted by just knowing the state of the system at the preceding observation, then the
process of change is called a Markov chain or Markov process.

Definition: If a Markov chain has k possible states, which we label as 1, 2, …, k, then the probability that the system is in state i at
any observation after it was in state j at the preceding observation is denoted by pij and is called the transition probability from state j
to state i. The matrix P = [pij] is called the transition matrix of the Markov chain.

These chains are a generalization of the migration matrices we studied earlier.

For example, in a three-state Markov chain, the transition matrix has the form

            Preceding State
             1    2    3
    P =   [ p₁₁  p₁₂  p₁₃ ]   1   New State
          [ p₂₁  p₂₂  p₂₃ ]   2
          [ p₃₁  p₃₂  p₃₃ ]   3

There is a second important matrix in the use of Markov chains, the state vector.

Definition: The state vector for an observation of a Markov chain with k states is a column vector x whose ith component xi is the
probability that the system is in the ith state at that time.

Consider the following example: Rent-a-Lemon has three locations from which to rent a car for one day: airport, downtown, and the
valley.

Daily Migration:

              Rented From
    Airport   Downtown   Valley     Returned To
     .95        .02        .05       Airport
     .03        .90        .05       Downtown
     .02        .08        .90       Valley

M = [ .95  .02  .05 ]
    [ .03  .90  .05 ]   (migration matrix)
    [ .02  .08  .90 ]

Note that the probabilities in each column total one. A matrix with this property is called a stochastic matrix.

The initial state vector for this situation is x₀ = ___.

If we wish to find the distribution of cars after one day, we can find x₁ = Mx₀.
In general, xₖ₊₁ = Mxₖ, so xₖ = Mᵏx₀.

So, the distribution after two days = M²x₀ = ___.

As k grows, Mᵏx₀ approaches the long-term distribution.

x = ___ is called a steady-state vector, since x = Mx.

Ex 1: By reviewing its donation records, the alumni office of a college finds that 80% of its alumni who contribute to the annual fund
one year will also contribute the next year, and 30% of those who do not contribute one year will contribute the next. Construct a
transition matrix and an initial state vector illustrating this situation. Find the contributions for each of the first five years. Continue until
you find the steady-state vector.
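A sketch of Ex 1 with sympy. Columns of M are "contributed last year" and "did not contribute last year"; the initial state vector is an assumption (the packet does not fix one), taken here as an even 50/50 split:

```python
from sympy import Matrix, Rational

M = Matrix([[Rational(8, 10), Rational(3, 10)],   # contribute next year
            [Rational(2, 10), Rational(7, 10)]])  # do not contribute next year
x = Matrix([Rational(1, 2), Rational(1, 2)])      # assumed initial state vector
for year in range(1, 6):
    x = M * x
    print(year, [float(v) for v in x])
# The iterates approach the steady-state vector (0.6, 0.4).
```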

There is a process that can be used to find the steady-state vector so that it is not necessary to use trial and error.

Finding the Steady State Vector

We know that in the long run


Mx = x
So,
Mx = Ix
Subtracting, we get

Mx – Ix = 0

Factoring, we get
(M – I)x = 0

Solve (M – I)x = 0 to find the steady state vector.

Note that the solution x must be a probability vector.
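Carrying this out for the Rent-a-Lemon matrix M above with sympy: take any nonzero solution of (M − I)x = 0, then scale it so its entries sum to 1.

```python
from sympy import Matrix, Rational, eye

M = Matrix([[Rational(95, 100), Rational( 2, 100), Rational( 5, 100)],
            [Rational( 3, 100), Rational(90, 100), Rational( 5, 100)],
            [Rational( 2, 100), Rational( 8, 100), Rational(90, 100)]])
v = (M - eye(3)).nullspace()[0]   # a nonzero solution of (M - I)x = 0
x = v / sum(v)                    # scale so the entries total 1 (a probability vector)
print(x.T)                        # steady-state vector (5/12, 5/18, 11/36)
print(M * x == x)                 # True: Mx = x
```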

Ex 2: Find the steady state vectors for each of the following transition matrices

a.

b.
Ex 3: Suppose that 3% of the population of the U.S. lives in the State of Washington. Suppose the migration of the population into
and out of Washington State will be constant for many years according to the following migration probabilities. What percentage of
the total U.S. population will eventually live in Washington?
           From
    WA      Rest of U.S.      To:
    0.90    0.01              WA
    0.10    0.99              Rest of U.S.
