Topics of linear algebra
Eigenvalue: One of a set of special scalars associated with a linear system of equations that describes that system's fundamental modes. An eigenvector is associated with each eigenvalue.
Eigenvector: One of a special set of vectors associated with a linear system of equations. An eigenvalue is associated with each eigenvector.
Euclidean space: The space of all n-tuples of real numbers. It is the generalization of the two-dimensional plane and three-dimensional space.
Inner product: (1) In a vector space, a way to multiply vectors together, with the result of this multiplication being a scalar. (2) A synonym for dot product.
Linear algebra: The study of linear systems of equations and their transformation properties.
Linear transformation: A function from one vector space to another. If bases are chosen for the vector spaces, a linear transformation can be given by a matrix.
Matrix: A concise and useful way of uniquely representing and working with linear transformations. In particular, for every linear transformation there exists exactly one corresponding matrix, and every matrix corresponds to a unique linear transformation. The matrix is an extremely important concept in linear algebra.
Matrix inverse: Given a matrix M, the inverse is a new matrix M⁻¹ that, when multiplied by M, gives the identity matrix.
Matrix multiplication: The process of multiplying two matrices (each of which represents a linear transformation), which forms a new matrix corresponding to the matrix representation of the two transformations' composition.
Norm: A quantity that describes the length, size, or extent of a mathematical object.
Vector space: A set that is closed under finite vector addition and scalar multiplication. The basic example is n-dimensional Euclidean space.
See also: List of matrices; Glossary of tensor theory.
Contents
1 Linear equations
2 Matrices
3 Matrix decompositions
4 Relations
5 Computations
6 Vector spaces
7 Structures
8 Multilinear algebra
9 Affine space and related topics
10 Projective space
Linear equations
Matrices
2 × 2 real matrices
Matrix theory
Matrix addition
Matrix multiplication
Basis transformation matrix
Characteristic polynomial
Trace
Eigenvalue, eigenvector and eigenspace
Cayley–Hamilton theorem
Spread of a matrix
Jordan normal form
Weyr canonical form
Rank
Matrix inversion, invertible matrix
Pseudoinverse
Adjugate
Transpose
Dot product
Symmetric matrix
Orthogonal matrix
Skew-symmetric matrix
Conjugate transpose
Unitary matrix
Hermitian matrix, Antihermitian matrix
Positive-definite, positive-semidefinite matrix
Pfaffian
Projection
Spectral theorem
Perron–Frobenius theorem
List of matrices
Diagonal matrix, main diagonal
Diagonalizable matrix
Triangular matrix
Tridiagonal matrix
Block matrix
Sparse matrix
Hessenberg matrix
Hessian matrix
Vandermonde matrix
Stochastic matrix
Toeplitz matrix
Circulant matrix
Hankel matrix
(0,1)-matrix
Matrix decompositions
Cholesky decomposition
LU decomposition
QR decomposition
Polar decomposition
Spectral theorem
Singular value decomposition
Higher-order singular value decomposition
Schur decomposition
Schur complement
Haynsworth inertia additivity formula
A linear system in three variables determines a collection of planes. The intersection point is the solution.
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same set of variables.[1] A solution to such a system assigns values to the variables that make all of the equations valid simultaneously. The word "system" indicates that the equations are to be considered collectively, rather than individually.
The theory of linear systems is the basis and a fundamental part of linear algebra, a subject which is used in most parts of modern mathematics.
Computational algorithms for finding the solutions are an important part of numerical linear
algebra, and play a prominent role in engineering, physics, chemistry, computer science,
and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
Very often, the coefficients of the equations are real or complex numbers and the solutions are sought in the same set of numbers, but the theory and the algorithms apply for coefficients and solutions in any field. For solutions in an integral domain like the ring of the integers, or in other algebraic structures, other theories have been developed; see Linear equation over a ring. Integer linear programming is a collection of methods for finding the "best" integer solution (when there are many). Gröbner basis theory provides algorithms when coefficients and unknowns are polynomials. Tropical geometry is another example of linear algebra in a more exotic structure.
Contents
1 Elementary example
2 General form
2.1 Vector equation
2.2 Matrix equation
3 Solution set
3.1 Geometric interpretation
3.2 General behavior
4 Properties
4.1 Independence
4.2 Consistency
4.3 Equivalence
5 Solving a linear system
5.1 Describing the solution
5.2 Elimination of variables
5.3 Row reduction
5.4 Cramer's rule
5.5 Matrix solution
5.6 Other methods
6 Homogeneous systems
6.1 Solution set
6.2 Relation to nonhomogeneous systems
7 See also
8 Notes
9 References
9.1 Textbooks
Elementary example
The simplest kind of linear system involves two equations and two variables:
One method for solving such a system is as follows. First, solve the top equation for one variable, say x, in terms of the other variable y. Substituting this expression into the bottom equation results in a single equation involving only the variable y. Solving that equation gives the value of y, and substituting this back into the expression for x yields the value of x. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra).
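The substitution method above can be checked numerically. The system below is a hypothetical stand-in (the article's own numbers were lost in extraction), and NumPy's direct solver is used for the cross-check.

```python
import numpy as np

# Hypothetical 2x2 system (not the article's original example):
#   x + 2y = 5
#   3x - y = 1
# By hand: x = 5 - 2y; substituting, 3(5 - 2y) - y = 1, so y = 2 and x = 1.
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

x = np.linalg.solve(A, b)  # direct solver agrees with the hand computation
print(x)  # the solution (x, y) = (1, 2)
```

The same substitution steps scale to any number of variables, which is exactly the "elimination of variables" procedure described later.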
General form
A general system of m linear equations with n unknowns can be written as
Here x1, x2, ..., xn are the unknowns, a11, a12, ..., amn are the coefficients of the system, and b1, b2, ..., bm are the constant terms.
This allows all the language and theory of vector spaces (or more
generally, modules) to be brought to bear. For example, the collection of
all possible linear combinations of the vectors on the left-hand side is
called their span, and the equations have a solution just when the right-
hand vector is within that span. If every vector within that span has exactly
one expression as a linear combination of the given left-hand vectors,
then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger
than m or n, but it can be smaller. This is important because if we
have m independent vectors a solution is guaranteed regardless of the
right-hand side, and otherwise not guaranteed.
Matrix equation

The vector equation is equivalent to a matrix equation of the form Ax = b, where A is the m × n matrix of coefficients, x is the column vector of the n unknowns, and b is the column vector of the m constant terms.

Solution set

A solution of a linear system is an assignment of values to the variables x1, x2, ..., xn such that each of the equations is satisfied. The set of all possible solutions is called the solution set.
For three variables, each linear equation determines a plane in three-dimensional space, and the
solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single
point, or the empty set. For example, three parallel planes do not have a common point, so the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these points.[2]
For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set
is the intersection of these hyperplanes, which may be a flat of any dimension.
General behavior
The solution set for two equations in three variables is usually a line.
In general, the behavior of a linear system is determined by the relationship between the number of
equations and the number of unknowns:
Usually, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system.
Usually, a system with the same number of equations and unknowns has a single unique solution.
Usually, a system with more equations than unknowns has no solution. Such a system is also known
as an overdetermined system.
In the first case, the dimension of the solution set is usually equal to n − m, where n is the number of
variables and m is the number of equations.
The following pictures illustrate this trichotomy in the case of two variables:
The first system has infinitely many solutions, namely all of the points on the blue line. The second
system has a single unique solution, namely the intersection of the two lines. The third system has no
solutions, since the three lines share no common point.
Keep in mind that the pictures above show only the most common case. It is possible for a system of
two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of
three equations and two unknowns to be solvable (if the three lines intersect at a single point). In
general, a system of linear equations may behave differently from expected if the equations
are linearly dependent, or if two or more of the equations are inconsistent.
Properties
Independence
The equations of a linear system are independent if none of the equations can be derived
algebraically from the others. When the equations are independent, each equation contains new
information about the variables, and removing any of the equations increases the size of the solution
set. For linear equations, logical independence is the same as linear independence.
For example, two equations that are the same equation when scaled by a factor of two are not independent: they would produce identical graphs. This is an example of equivalence in a system of linear equations.
Similarly, three equations are not independent when the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of such equations are three lines that intersect at a single point.
Consistency

See also: Consistent and inconsistent equations

A linear system is inconsistent if it has no solution, and otherwise it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, which may always be rewritten as the statement 0 = 1.
For example, the equations of two distinct parallel lines are inconsistent. In fact, by subtracting the first equation from the second and rescaling the result, we obtain 0 = 1; the graphs of these equations on the xy-plane are a pair of parallel lines.
It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, in one such system, adding the first two equations gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1, even though any two of the three equations have a common solution. The same phenomenon can occur for any number of equations.
In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly
dependent, and the constant terms do not satisfy the dependence relation. A system of equations
whose left-hand sides are linearly independent is always consistent.
Putting it another way, according to the Rouché–Capelli theorem, any system of equations
(overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the
rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the
system must have at least one solution. The solution is unique if and only if the rank equals the number
of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The
rank of a system of equations can never be higher than [the number of variables] + 1, which means
that a system with any number of equations can always be reduced to a system that has a number
of independent equations that is at most equal to [the number of variables] + 1.
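The Rouché–Capelli criterion can be sketched with NumPy's `matrix_rank`; the tiny inconsistent system below is an illustrative assumption, not taken from the article.

```python
import numpy as np

# Hypothetical inconsistent system: x + y = 1 and x + y = 2.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([[1.0],
              [2.0]])

rank_A = np.linalg.matrix_rank(A)                    # rank of the coefficient matrix
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))   # rank of the augmented matrix

# Rouche-Capelli: the system is consistent iff the two ranks agree.
consistent = (rank_A == rank_Ab)
```

Here appending b raises the rank from 1 to 2, so the constant terms violate the dependence relation between the left-hand sides and the system has no solution.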
Equivalence
Two linear systems using the same set of variables are equivalent if each of the equations in the
second system can be derived algebraically from the equations in the first system, and vice versa. Two
systems are equivalent if either both are inconsistent or each equation of each of them is a linear
combination of the equations of the other one. It follows that two linear systems are equivalent if and
only if they have the same solution set.
Solving a linear system

Describing the solution

It can be difficult to describe a solution set that is infinite. Typically, some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, a solution set may be described by equations that express x and y in terms of a free variable z. Any point in the solution set can then be obtained by first choosing a value for z, and then computing the corresponding values for x and y.
Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set just described is a line, since a point in the solution set can be chosen by specifying the value of the single parameter z. A solution set with more free variables describes a plane, or a higher-dimensional set.
Different choices for the free variables may lead to different descriptions of the same solution set. For
example, the solution to the above equations can alternatively be described as follows:
Elimination of variables

The simplest method for solving a system of linear equations is to repeatedly eliminate variables:

1. In the first equation, solve for one of the variables in terms of the others.
2. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and one fewer unknown.
3. Continue until you have reduced the system to a single linear equation.
4. Solve this equation, and then back-substitute until the entire solution is found.
Solving the first equation for x gives x = 5 + 2z − 3y, and plugging this into the second and third equations yields a smaller system in y and z. Solving the first of these equations for y yields y = 2 + 3z, and plugging this into the second equation yields z = 2. Substituting z = 2 into the second equation gives y = 8, and substituting z = 2 and y = 8 into the first equation yields x = −15. Therefore, the solution set is the single point (x, y, z) = (−15, 8, 2).
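The worked elimination can be cross-checked with a direct solver. The first row x + 3y − 2z = 5 follows from the step x = 5 + 2z − 3y; the other two rows below are an assumption chosen to be consistent with the stated solution (−15, 8, 2), since the original equations were lost in extraction.

```python
import numpy as np

# Coefficients reconstructed from the worked steps (the last two rows
# are assumptions consistent with the solution (-15, 8, 2)):
#   x + 3y - 2z = 5
#   3x + 5y + 6z = 7
#   2x + 4y + 3z = 8
A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0],
              [2.0, 4.0, 3.0]])
b = np.array([5.0, 7.0, 8.0])

x = np.linalg.solve(A, b)
print(x)  # the single point (x, y, z) = (-15, 8, 2)
```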
Row reduction
Main article: Gaussian elimination
In this approach, the linear system is represented as an augmented matrix. This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:
Type 1: Swap the positions of two rows.
Type 2: Multiply a row by a nonzero scalar.
Type 3: Add to one row a scalar multiple of another.
Because these operations are reversible, the augmented matrix produced always represents a linear
system that is equivalent to the original.
There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. Gauss–Jordan elimination can be applied to the augmented matrix of the worked example above.
The last matrix is in reduced row echelon form, and represents the solution x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
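A minimal Gauss–Jordan reduction to reduced row echelon form can be sketched as follows. The `rref` helper is a hypothetical name, and the augmented matrix is reconstructed from the worked example's steps (an assumption consistent with the stated solution (−15, 8, 2)).

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce an augmented matrix to reduced row echelon form using
    the three elementary row operations (swap, scale, add-multiple)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols - 1):                    # last column holds the constants
        # Partial pivoting: bring the largest entry up (Type 1: swap)
        pivot = np.argmax(np.abs(M[pivot_row:, col])) + pivot_row
        if abs(M[pivot, col]) < tol:
            continue                               # no pivot in this column
        M[[pivot_row, pivot]] = M[[pivot, pivot_row]]
        M[pivot_row] /= M[pivot_row, col]          # Type 2: scale pivot row to 1
        for r in range(rows):
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]   # Type 3: eliminate the column
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# Augmented matrix of the worked example (reconstructed; solution (-15, 8, 2)):
aug = np.array([[1.0, 3.0, -2.0, 5.0],
                [3.0, 5.0, 6.0, 7.0],
                [2.0, 4.0, 3.0, 8.0]])
R = rref(aug)
```

Because each operation is reversible, `R` represents a system equivalent to the original; its last column reads off the solution directly.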
Cramer's rule

Main article: Cramer's rule

Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to a system of two equations in two unknowns is given by quotients of 2 × 2 determinants.
For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.
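Cramer's rule is easy to sketch with NumPy determinants; the 2 × 2 system below is an illustrative assumption, and `cramer` is a hypothetical helper name.

```python
import numpy as np

def cramer(A, b):
    """Cramer's rule: each unknown is det(A with one column replaced
    by b), divided by det(A)."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i with the constants
        x[i] = np.linalg.det(Ai) / d
    return x

# Hypothetical system: x + 2y = 5, 3x - y = 1, with solution (1, 2).
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])
x = cramer(A, b)
```

This mirrors the description above: one determinant per variable in the numerator, the coefficient determinant in the denominator. For anything beyond small systems, row reduction is both faster and numerically safer.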
Matrix solution

If the equation system is expressed in the matrix form Ax = b, the entire solution set can also be expressed in matrix form. If the matrix A is square (has n rows and n columns) and has full rank (all n rows are independent), then the system has a unique solution given by

x = A⁻¹b

where A⁻¹ is the inverse of A. More generally, regardless of whether the matrix is square or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose pseudoinverse of A, denoted A⁺, as follows:

x = A⁺b + (I − A⁺A)w

where w is a vector of free parameters that ranges over all possible column vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 satisfies Ax = b, that is, AA⁺b = b. If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A⁺ simply equals A⁻¹ and the general solution equation simplifies to x = A⁻¹b as previously stated, where w has completely dropped out of the solution, leaving only a single solution. In other cases, though, w remains and hence an infinitude of potential values of the free parameter vector w give an infinitude of solutions of the equation.
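The pseudoinverse formulation can be sketched with `np.linalg.pinv`; the one-equation underdetermined system below is an illustrative assumption.

```python
import numpy as np

# Underdetermined hypothetical system: x + y = 2 (one equation, two unknowns).
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
A_pinv = np.linalg.pinv(A)           # Moore-Penrose pseudoinverse of A

# Consistency test: A A+ b must equal b for any solution to exist.
consistent = np.allclose(A @ A_pinv @ b, b)

# Particular solution at w = 0, plus the free-parameter family:
x0 = A_pinv @ b                                    # minimum-norm solution (1, 1)
w = np.array([3.0, -3.0])                          # any free-parameter vector
x = x0 + (np.eye(2) - A_pinv @ A) @ w              # another solution of Ax = b
```

Varying w sweeps out the whole solution set (here, the line x + y = 2); when A is square and full rank, the projector I − A⁺A is zero and w drops out, leaving the single solution A⁻¹b.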
Other methods

While systems of three or four equations can be readily solved by hand, computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b.

If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications.

A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods.
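One of the simplest iterative methods is Jacobi iteration, sketched below on a hypothetical diagonally dominant system (an assumption that guarantees convergence).

```python
import numpy as np

def jacobi(A, b, iterations=50):
    """Jacobi iteration: repeatedly refine an approximation to the
    solution of Ax = b. Converges when A is, e.g., strictly
    diagonally dominant."""
    x = np.zeros_like(b)                 # crude initial approximation
    D = np.diag(A)                       # diagonal entries of A
    R = A - np.diagflat(D)               # off-diagonal part of A
    for _ in range(iterations):
        x = (b - R @ x) / D              # one refinement step
    return x

# Hypothetical diagonally dominant system: 4x + y = 9, x + 3y = 7.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([9.0, 7.0])
x = jacobi(A, b)
```

Each step costs only a matrix-vector product, which is why iterative methods dominate for very large (especially sparse) systems where a full LU factorization would be too expensive.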
Homogeneous systems

See also: Homogeneous differential equation

A system of linear equations is homogeneous if all of the constant terms are zero. A homogeneous system is equivalent to a matrix equation of the form

Ax = 0

where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.
Solution set

Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties:

1. If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system.
2. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system.

These are exactly the properties required for the solution set to be a linear subspace of R^n. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. Numerical solutions to a homogeneous system can be found with a singular value decomposition.
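Computing the null space numerically via an SVD can be sketched as follows; the singular matrix below is an illustrative assumption.

```python
import numpy as np

# Null space of A via SVD: the right singular vectors whose singular
# values are (numerically) zero span the solution set of Ax = 0.
# Hypothetical singular matrix: the second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
U, s, Vt = np.linalg.svd(A)
null_mask = s < 1e-10 * s.max()    # tolerate floating-point noise
null_space = Vt[null_mask].T       # columns span the null space
```

Here the null space is one-dimensional, matching the rank-one matrix: every solution of Ax = 0 is a scalar multiple of the single returned column.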
Relation to nonhomogeneous systems

There is a close relationship between the solutions to a linear system Ax = b and the solutions to the corresponding homogeneous system Ax = 0. Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as

{p + v : v is any solution to Ax = 0}.

Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0: the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution, which happens if and only if the vector b lies in the column space of A.