
Vectors

Vectors are mathematical objects that represent quantities with both magnitude
and direction. They can be represented as ordered lists of numbers, called
components, or as geometric entities with arrows indicating direction and length.

Vector Spaces
A vector space is a set of vectors equipped with two operations: vector addition
and scalar multiplication. These operations must satisfy certain properties, such
as closure under addition and scalar multiplication, associativity, commutativity
of addition, the existence of an additive identity, and the existence of additive
inverses.

Subspaces
A subspace of a vector space is a subset that is itself a vector space. It must
satisfy the vector space axioms and is closed under vector addition and scalar
multiplication.

Vector Space Axioms


Vector space axioms are the set of properties that a set of vectors and associated
operations must satisfy to be considered a vector space. These axioms include
closure under addition and scalar multiplication, associativity, commutativity
of addition, existence of an additive identity, and existence of additive inverses.

Linear Independence of Vectors


A set of vectors is linearly independent if no vector in the set can be represented
as a linear combination of the others. In other words, no vector in the set is
redundant.
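Linear independence can be checked mechanically: stack the vectors as rows of a matrix and compare its rank to the number of vectors. A minimal pure-Python sketch (the vectors below are illustrative, not from any specific example):

```python
# Sketch: a set of vectors is linearly independent iff the matrix whose rows
# are the vectors has rank equal to the number of vectors.

def rank(rows, eps=1e-10):
    """Rank of a small matrix via Gaussian elimination with partial pivot search."""
    m = [list(map(float, r)) for r in rows]
    r = 0  # index of the next pivot row
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][c]) > eps), None)
        if pivot is None:
            continue  # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linearly_independent(vectors):
    return rank(vectors) == len(vectors)

print(linearly_independent([[1, 0, 0], [0, 1, 0]]))  # True: neither is redundant
print(linearly_independent([[1, 2], [2, 4]]))        # False: second = 2 * first
```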

Basis and Dimension of a Vector Space
A basis for a vector space is a linearly independent set of vectors that spans the
entire space. The dimension of a vector space is the number of vectors in any
basis for that space.

Orthogonal Vectors and Subspaces


Vectors are orthogonal if their dot product is zero. An orthogonal basis is a set
of vectors in which every pair of distinct vectors is orthogonal. Two subspaces
are orthogonal if every vector in one is orthogonal to every vector in the other.
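Orthogonality of a pair of vectors can be confirmed directly from the definition; a minimal check in Python (the sample pair is illustrative):

```python
# Two vectors are orthogonal iff their dot product is zero.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = [3, 4], [-4, 3]
print(dot(u, v))  # 0, so u and v are orthogonal
```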

Gram-Schmidt Orthogonalization
Gram-Schmidt orthogonalization is a method to transform a linearly indepen-
dent set of vectors into an orthogonal (or orthonormal) set. This process involves
iteratively creating orthogonal vectors by subtracting off the projections of the
already constructed orthogonal vectors. It is often used to find an orthogonal
basis for a subspace.
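The iteration described above can be sketched in a few lines. This is a minimal classical Gram-Schmidt, assuming the input vectors are linearly independent (so no normalization divides by zero):

```python
import math

# Minimal sketch of classical Gram-Schmidt: subtract from each vector its
# projections onto the orthonormal vectors built so far, then normalize.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Return an orthonormal list spanning the same space (inputs assumed independent)."""
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            c = dot(v, u)  # projection coefficient onto the unit vector u
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(dot(w, w))
        ortho.append([wi / norm for wi in w])
    return ortho

e1, e2 = gram_schmidt([[1.0, 1.0], [2.0, -1.0]])
print(e1)  # ~[0.707, 0.707]
print(e2)  # ~[0.707, -0.707]
```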

Common Ways of Representing Vectors

As Column or Row Matrices:

v = [v1, v2, . . . , vn]^T (Column vector)

v = [v1 v2 . . . vn] (Row vector)

In Component Form:

v = ⟨v1 , v2 , . . . , vn ⟩ or v = (v1 , v2 , . . . , vn )

Using Unit Vectors:

v = v1 i + v2 j + v3 k (In 3D space, using the unit vectors i, j, k)

In Parametric Form:

v = [x, y]^T = xi + yj (In 2D space, for example)

As Coordinates in a Coordinate System:

v = (x, y, z) in 3D Cartesian coordinates.

Using Magnitude and Direction:

v = |v| · v̂, where |v| is the magnitude and v̂ is the unit vector in the direction of v.

As a Linear Combination:

v = c1 v1 + c2 v2 + . . . + cn vn where c1 , c2 , . . . , cn are scalars.

Using Function Notation:

v(t) = [f(t), g(t)]^T (For vector-valued functions)

In Polar Coordinates:

v = (r, θ) where r is the magnitude and θ is the angle with respect to a reference axis.
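Converting between the magnitude-and-direction (polar) form and Cartesian components follows directly from trigonometry; a small sketch with illustrative values:

```python
import math

# Sketch: convert between the (r, theta) form and Cartesian components in 2D.

def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def cartesian_to_polar(x, y):
    return (math.hypot(x, y), math.atan2(y, x))

# A vector of magnitude 5 at the angle whose tangent is 4/3:
x, y = polar_to_cartesian(5.0, math.atan2(4, 3))
print(round(x, 6), round(y, 6))  # 3.0 4.0
```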

Span of a Set of Vectors


The span of a set of vectors in a vector space is the set of all possible linear
combinations of those vectors. In other words, the span is the collection of all
vectors that can be formed by scaling each vector in the set and adding them
together. Mathematically, if V is a vector space and {v1 , v2 , . . . , vn } is a set
of vectors in V , then the span of this set, denoted as span{v1 , v2 , . . . , vn }, is
defined as:
span{v1, v2, . . . , vn} = {c1 v1 + c2 v2 + . . . + cn vn | c1, c2, . . . , cn are scalars}

In simpler terms, the span consists of all possible linear combinations where
each ci is a scalar and vi is a vector from the given set. The span is a subspace
of the original vector space and includes all vectors that can be reached by
combining the vectors in the set through scalar multiplication and addition. If
the span of a set of vectors is the entire vector space, then the set is said to be
a spanning set for that space.
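In R2, membership in span{v1, v2} reduces to solving c1 v1 + c2 v2 = w, a 2×2 linear system. A sketch using Cramer's rule (the helper name and sample vectors are illustrative):

```python
# Sketch: find scalars (c1, c2) with c1*v1 + c2*v2 = w in R^2 via Cramer's rule.
# Returns None when v1 and v2 are parallel (determinant ~ 0).

def span_coords(v1, v2, w, eps=1e-10):
    det = v1[0] * v2[1] - v2[0] * v1[1]
    if abs(det) < eps:
        return None  # v1, v2 do not span R^2
    c1 = (w[0] * v2[1] - v2[0] * w[1]) / det
    c2 = (v1[0] * w[1] - w[0] * v1[1]) / det
    return (c1, c2)

print(span_coords([1, 0], [0, 1], [3, -4]))  # (3.0, -4.0)
```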

Examples for Each Concept

Vectors

Example 1:

In 3-dimensional space (R3), a vector could be represented as v = [2, −3, 1]^T.

Example 2:

In physics, velocity can be represented as a vector, e.g., v = 10i − 5j, where i
and j are unit vectors in the x and y directions.

Vector Spaces

Example 1:

The set of all 2 × 2 matrices with real number entries (R2×2 ) is a vector space.

Example 2:

The set of all polynomials of degree at most 3 with real coefficients is a vector
space.

Subspaces

Example 1:

In R3 , the xy-plane {(x, y, 0) | x, y ∈ R} is a subspace.

Example 2:

In R4 , the set of all 4 × 1 column vectors whose entries sum to zero forms a
subspace.

Vector Space Axioms

Example 1:

Associativity of addition - For any vectors u, v, w in a vector space,
(u + v) + w = u + (v + w).

Example 2:

Existence of an additive identity - There exists a vector 0 such that v + 0 = v
for any vector v.

Linear Independence of Vectors

Example 1:

In R3, the vectors v1 = [1, 0, 0]^T and v2 = [0, 1, 0]^T are linearly independent.

Example 2:

The vectors u = [2, 1]^T and v = [−1, 3]^T are linearly independent in R2.

Basis and Dimension of a Vector Space

Example 1:

In R3 , the standard basis {i, j, k} forms a basis, and the dimension is 3.

Example 2:

The polynomials {1, x, x2 } form a basis for the vector space of polynomials of
degree at most 2, and the dimension is 3.

Orthogonal Vectors and Subspaces

Example 1:

In R2, the vectors u = [3, 4]^T and v = [−4, 3]^T are orthogonal.

Example 2:

In R3, the xy-plane {(x, y, 0) | x, y ∈ R} and the z-axis {(0, 0, z) | z ∈ R} are
orthogonal subspaces: every vector in one is orthogonal to every vector in the other.

Gram-Schmidt Orthogonalization

Example:

Consider the vectors v1 = [1, 0, 1]^T, v2 = [1, 1, 1]^T, and v3 = [2, 1, −1]^T.
Applying the Gram-Schmidt process to these vectors produces an orthogonal set.

Examples of Vector Spanning in Vector Spaces

Example 1:

Consider the vectors v1 = [1, 0]^T and v2 = [0, 1]^T in R2, the Euclidean plane.
These vectors are linearly independent, and their span is the entire R2.
Explanation:
Any vector v = [x, y]^T in R2 can be expressed as v = xv1 + yv2, which is a
linear combination of v1 and v2. The span of {v1, v2} covers all possible vectors
in R2, demonstrating that these vectors span the entire space.

Example 2:

Consider the vectors u = [1, 2, 1]^T and v = [−1, 0, 1]^T in R3. These vectors are
linearly independent, and their span forms a subspace of R3.
Explanation:
Any vector w in the span of {u, v} can be expressed as w = au + bv for some
scalars a and b. The span of {u, v} consists of all possible linear combinations
of u and v. This span is a plane in R3 passing through the origin.
The span of these vectors does not cover all of R3, but it forms a subspace
(a plane) within R3.
In both examples, the concept of span illustrates how a set of vectors can
generate a space by forming linear combinations of those vectors.

Unit Vector
A unit vector is a vector with a magnitude (length) of 1, often used to represent
direction. If v is a vector, its corresponding unit vector is denoted by v̂, and it
is given by:

v̂ = v / ∥v∥

Here, ∥v∥ represents the magnitude (length) of the vector v. Unit vectors are
essential in many mathematical and physical applications, providing a way to
express direction without concern for scale.
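The formula v̂ = v / ∥v∥ translates directly into code; a small sketch using the same vector as Question 1 below:

```python
import math

# Sketch: unit vector v / ||v||, undefined for the zero vector.

def unit(v):
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0:
        raise ValueError("the zero vector has no direction")
    return [x / norm for x in v]

print(unit([3, -4]))  # [0.6, -0.8]
```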

Normal Vector
The normal vector to a surface at a given point is a vector that is perpendicular
(or orthogonal) to that surface at that point. It is also referred to as the
"surface normal" and is often denoted by N or n.
In the context of a plane defined by the equation Ax + By + Cz = D, the
coefficients A, B, and C form a vector N = [A, B, C]^T. The direction of this
vector is perpendicular to the plane.
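Reading off and normalizing a plane's normal vector can be sketched as follows; the plane is the same one used in Question 2 later in these notes:

```python
import math

# Sketch: the unit normal of the plane Ax + By + Cz = D is (A, B, C) / ||(A, B, C)||.

def unit_normal(a, b, c):
    n = [a, b, c]
    norm = math.sqrt(sum(x * x for x in n))
    return [x / norm for x in n]

n_hat = unit_normal(2, -3, 1)  # plane 2x - 3y + z = 5
print([round(x, 4) for x in n_hat])  # components of (1/sqrt(14)) * (2, -3, 1)
```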

Question 1:

Consider the vector v = [3, −4]^T. Find the unit vector in the same direction as v.

Solution 1:

The unit vector v̂ in the direction of v is given by:

v̂ = v / ∥v∥

where ∥v∥ represents the magnitude of v.

∥v∥ = √(3² + (−4)²) = 5

Now, calculate the unit vector:

v̂ = (1/5)[3, −4]^T = [0.6, −0.8]^T

So, the unit vector in the direction of v is v̂ = [0.6, −0.8]^T.

Question 2:
Consider the plane 2x − 3y + z = 5. Find a unit normal vector to this plane.

Solution 2:

The coefficients of the plane 2x − 3y + z = 5 give the normal vector N = [2, −3, 1]^T.
Now, calculate the unit normal vector:

∥N∥ = √(2² + (−3)² + 1²) = √14

N̂ = N / ∥N∥ = (1/√14)[2, −3, 1]^T

So, a unit normal vector to the plane 2x − 3y + z = 5 is N̂ = (1/√14)[2, −3, 1]^T.

Projection of a Vector onto Another Vector
The projection of a vector onto another vector (or onto a subspace) is a mea-
sure of how much of one vector lies in the direction of another. It essentially
represents the shadow of one vector onto the direction of another.
Given two vectors v and u, the projection of v onto u is denoted as proju (v),
and it is calculated using the following formula:

proj_u(v) = ((v · u) / ∥u∥²) · u

Here,

• v · u is the dot product of vectors v and u.

• ∥u∥2 is the squared magnitude of vector u.

• ((v · u) / ∥u∥²) · u represents the vector projection of v onto u.

The resulting projection vector lies along the direction of u and represents
the part of v that is in the same direction as u.
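The projection formula can be sketched directly; the sample vectors mirror the first Gram-Schmidt exercise below (v = [2, −1], u = [1, 1]):

```python
# Sketch of proj_u(v) = ((v . u) / ||u||^2) * u.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(v, u):
    scale = dot(v, u) / dot(u, u)  # (v . u) / ||u||^2
    return [scale * x for x in u]

print(project([2, -1], [1, 1]))  # [0.5, 0.5]
```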

Gram-Schmidt Orthogonalization

Question 1

Consider the vectors v1 = [1, 1] and v2 = [2, −1]. Apply the Gram-Schmidt
process to orthogonalize these vectors.

Solution 1

Initialize:
u1 = v1 = [1, 1]

Orthogonalize v2:

proj_u1(v2) = ((v2 · u1) / ∥u1∥²) · u1 = (1/2)[1, 1] = [0.5, 0.5]

u2 = v2 − proj_u1(v2) = [2, −1] − [0.5, 0.5] = [1.5, −1.5]

Normalize:

e1 = u1 / ∥u1∥ = (1/√2)[1, 1] = [0.707, 0.707]

e2 = u2 / ∥u2∥ = (1/(1.5√2))[1.5, −1.5] = [0.707, −0.707]

So, the orthonormal set is {e1, e2} = {[0.707, 0.707], [0.707, −0.707]}.

Question 2

Given vectors v1 = [3, 4] and v2 = [−2, 5], apply the Gram-Schmidt process to
orthogonalize these vectors.

Solution 2

Follow the steps from the Gram-Schmidt process:


Initialize:
u1 = v1 = [3, 4]

Orthogonalize v2:

proj_u1(v2) = ((v2 · u1) / ∥u1∥²) · u1 = (14/25)[3, 4] = [1.68, 2.24]

u2 = v2 − proj_u1(v2) = [−2, 5] − [1.68, 2.24] = [−3.68, 2.76]

Normalize:

e1 = u1 / ∥u1∥ = (1/5)[3, 4] = [0.6, 0.8]

e2 = u2 / ∥u2∥ = (1/4.6)[−3.68, 2.76] = [−0.8, 0.6]

So, the orthonormal set is {e1, e2} = {[0.6, 0.8], [−0.8, 0.6]}.

Question 3

Given vectors v1 = [2, −1, 1] and v2 = [1, 2, −1], apply the Gram-Schmidt
process to orthogonalize these vectors.

Solution 3

Follow the steps from the Gram-Schmidt process:


Initialize:
u1 = v1 = [2, −1, 1]

Orthogonalize v2:

proj_u1(v2) = ((v2 · u1) / ∥u1∥²) · u1 = (−1/6)[2, −1, 1] = [−1/3, 1/6, −1/6]

u2 = v2 − proj_u1(v2) = [1, 2, −1] − [−1/3, 1/6, −1/6] = [4/3, 11/6, −5/6] = (1/6)[8, 11, −5]

Normalize:

e1 = u1 / ∥u1∥ = (1/√6)[2, −1, 1] ≈ [0.816, −0.408, 0.408]

e2 = u2 / ∥u2∥ = (1/√210)[8, 11, −5] ≈ [0.552, 0.759, −0.345]

So, the orthonormal set is {e1, e2} ≈ {[0.816, −0.408, 0.408], [0.552, 0.759, −0.345]}.

Question 1

Consider the vectors v1 = [1, 2, −1]^T, v2 = [0, 3, 2]^T, and v3 = [−2, 1, 3]^T.
Apply the Gram-Schmidt process to orthogonalize these vectors.

Solution 1

Follow the steps of the Gram-Schmidt process:

Initialize:
u1 = v1 = [1, 2, −1]

Orthogonalize v2: u2 = v2 − proj_u1(v2)
proj_u1(v2) = ((v2 · u1) / ∥u1∥²) · u1 = (4/6)[1, 2, −1] = [2/3, 4/3, −2/3]
u2 = [0, 3, 2] − [2/3, 4/3, −2/3] = [−2/3, 5/3, 8/3] = (1/3)[−2, 5, 8]

Orthogonalize v3: u3 = v3 − proj_u1(v3) − proj_u2(v3)
proj_u1(v3) = ((v3 · u1) / ∥u1∥²) · u1 = (−3/6)[1, 2, −1] = [−1/2, −1, 1/2]
proj_u2(v3) = ((v3 · u2) / ∥u2∥²) · u2 = (11/(31/3)) · (1/3)[−2, 5, 8] = (11/31)[−2, 5, 8]
u3 = [−2, 1, 3] − [−1/2, −1, 1/2] − [−22/31, 55/31, 88/31] = (1/62)[−49, 14, −21] = (7/62)[−7, 2, −3]

Normalize:
e1 = u1 / ∥u1∥ = (1/√6)[1, 2, −1]
e2 = u2 / ∥u2∥ = (1/√93)[−2, 5, 8]
e3 = u3 / ∥u3∥ = (1/√62)[−7, 2, −3]

So, the orthonormal set is {e1, e2, e3}.

Question 2

Consider the vectors v1 = [1, 0, 1]^T, v2 = [1, 1, 1]^T, and v3 = [−1, 1, 0]^T.
Apply the Gram-Schmidt process to orthogonalize these vectors.

Solution 2

Follow the steps of the Gram-Schmidt process:

Initialize:
u1 = v1 = [1, 0, 1]

Orthogonalize v2: u2 = v2 − proj_u1(v2)
proj_u1(v2) = ((v2 · u1) / ∥u1∥²) · u1 = (2/2)[1, 0, 1] = [1, 0, 1]
u2 = [1, 1, 1] − [1, 0, 1] = [0, 1, 0]

Orthogonalize v3: u3 = v3 − proj_u1(v3) − proj_u2(v3)
proj_u1(v3) = ((v3 · u1) / ∥u1∥²) · u1 = (−1/2)[1, 0, 1] = [−1/2, 0, −1/2]
proj_u2(v3) = ((v3 · u2) / ∥u2∥²) · u2 = (1/1)[0, 1, 0] = [0, 1, 0]
u3 = [−1, 1, 0] − [−1/2, 0, −1/2] − [0, 1, 0] = [−1/2, 0, 1/2]

Normalize: ∥u1∥ = √2, ∥u2∥ = 1, ∥u3∥ = 1/√2
e1 = u1 / ∥u1∥ = (1/√2)[1, 0, 1]
e2 = u2 / ∥u2∥ = [0, 1, 0]
e3 = u3 / ∥u3∥ = (1/√2)[−1, 0, 1]

So, the orthonormal set is {e1, e2, e3}.

Vector Norms
The norm of a vector is a measure of its length or magnitude in a vector space.
The norm is denoted by ∥v∥ and is a non-negative scalar value. The concept of
the norm generalizes the notion of the magnitude of a vector in Euclidean space
(Rn ).

Euclidean Norm (L2 Norm)

For a vector v = [v1 , v2 , . . . , vn ], the Euclidean norm is defined as:

∥v∥2 = √(v1² + v2² + . . . + vn²)

Manhattan Norm (L1 Norm)

The L1 norm is the sum of the absolute values of the vector components:

∥v∥1 = |v1 | + |v2 | + . . . + |vn |

Infinity Norm (L∞ Norm)

The L∞ norm is the maximum absolute value of any component of the vector:

∥v∥∞ = max(|v1 |, |v2 |, . . . , |vn |)

P-Norm

More generally, the p-norm is defined as:

∥v∥p = (|v1|^p + |v2|^p + . . . + |vn|^p)^(1/p), where p ≥ 1

The concept of the norm is crucial in various mathematical and computational
applications, including optimization, linear algebra, machine learning, and
signal processing.
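The norms defined above share a single formula once the infinity norm is treated as a limiting case; a minimal sketch, computed for the illustrative vector v = (3, −4):

```python
import math

# Sketch: the p-norm covers L1, L2, and (as a limiting case) L-infinity.

def norm_p(v, p):
    if p == math.inf:
        return max(abs(x) for x in v)  # infinity norm: largest absolute component
    return sum(abs(x) ** p for x in v) ** (1 / p)

v = [3, -4]
print(norm_p(v, 2))         # 5.0  (Euclidean / L2)
print(norm_p(v, 1))         # 7.0  (Manhattan / L1)
print(norm_p(v, math.inf))  # 4    (infinity norm)
```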
