Topics of Linear Algebra

The document provides a list of prime numbers up to 7079, followed by an overview of key topics in a linear algebra course. It includes definitions for concepts such as eigenvalues, eigenvectors, inner products, linear transformations, and matrix operations. It also outlines topics related to linear algebra, including linear equations, matrices, and vector spaces.


2 3 5 7 11 13 17 19 23 29 31 37 41 43 47

53 59 61 67 71 73 79 83 89 97 101 103 107 109 113


127 131 137 139 149 151 157 163 167 173
179 181 191 193 197 199 211 223 227 229
233 239 241 251 257 263 269 271 277 281
283 293 307 311 313 317 331 337 347 349
353 359 367 373 379 383 389 397 401 409
419 421 431 433 439 443 449 457 461 463
467 479 487 491 499 503 509 521 523 541
547 557 563 569 571 577 587 593 599 601
607 613 617 619 631 641 643 647 653 659
661 673 677 683 691 701 709 719 727 733
739 743 751 757 761 769 773 787 797 809
811 821 823 827 829 839 853 857 859 863
877 881 883 887 907 911 919 929 937 941
947 953 967 971 977 983 991 997 1009 1013
1019 1021 1031 1033 1039 1049 1051 1061 1063 1069
1087 1091 1093 1097 1103 1109 1117 1123 1129 1151
1153 1163 1171 1181 1187 1193 1201 1213 1217 1223
1229 1231 1237 1249 1259 1277 1279 1283 1289 1291
1297 1301 1303 1307 1319 1321 1327 1361 1367 1373
1381 1399 1409 1423 1427 1429 1433 1439 1447 1451
1453 1459 1471 1481 1483 1487 1489 1493 1499 1511
1523 1531 1543 1549 1553 1559 1567 1571 1579 1583
1597 1601 1607 1609 1613 1619 1621 1627 1637 1657
1663 1667 1669 1693 1697 1699 1709 1721 1723 1733
1741 1747 1753 1759 1777 1783 1787 1789 1801 1811
1823 1831 1847 1861 1867 1871 1873 1877 1879 1889
1901 1907 1913 1931 1933 1949 1951 1973 1979 1987
1993 1997 1999 2003 2011 2017 2027 2029 2039 2053
2063 2069 2081 2083 2087 2089 2099 2111 2113 2129
2131 2137 2141 2143 2153 2161 2179 2203 2207 2213
2221 2237 2239 2243 2251 2267 2269 2273 2281 2287
2293 2297 2309 2311 2333 2339 2341 2347 2351 2357
2371 2377 2381 2383 2389 2393 2399 2411 2417 2423
2437 2441 2447 2459 2467 2473 2477 2503 2521 2531
2539 2543 2549 2551 2557 2579 2591 2593 2609 2617
2621 2633 2647 2657 2659 2663 2671 2677 2683 2687
2689 2693 2699 2707 2711 2713 2719 2729 2731 2741
2749 2753 2767 2777 2789 2791 2797 2801 2803 2819
2833 2837 2843 2851 2857 2861 2879 2887 2897 2903
2909 2917 2927 2939 2953 2957 2963 2969 2971 2999
3001 3011 3019 3023 3037 3041 3049 3061 3067 3079
3083 3089 3109 3119 3121 3137 3163 3167 3169 3181
3187 3191 3203 3209 3217 3221 3229 3251 3253 3257
3259 3271 3299 3301 3307 3313 3319 3323 3329 3331
3343 3347 3359 3361 3371 3373 3389 3391 3407 3413
3433 3449 3457 3461 3463 3467 3469 3491 3499 3511
3517 3527 3529 3533 3539 3541 3547 3557 3559 3571
3581 3583 3593 3607 3613 3617 3623 3631 3637 3643
3659 3671 3673 3677 3691 3697 3701 3709 3719 3727
3733 3739 3761 3767 3769 3779 3793 3797 3803 3821
3823 3833 3847 3851 3853 3863 3877 3881 3889 3907
3911 3917 3919 3923 3929 3931 3943 3947 3967 3989
4001 4003 4007 4013 4019 4021 4027 4049 4051 4057
4073 4079 4091 4093 4099 4111 4127 4129 4133 4139
4153 4157 4159 4177 4201 4211 4217 4219 4229 4231
4241 4243 4253 4259 4261 4271 4273 4283 4289 4297
4327 4337 4339 4349 4357 4363 4373 4391 4397 4409
4421 4423 4441 4447 4451 4457 4463 4481 4483 4493
4507 4513 4517 4519 4523 4547 4549 4561 4567 4583
4591 4597 4603 4621 4637 4639 4643 4649 4651 4657
4663 4673 4679 4691 4703 4721 4723 4729 4733 4751
4759 4783 4787 4789 4793 4799 4801 4813 4817 4831
4861 4871 4877 4889 4903 4909 4919 4931 4933 4937
4943 4951 4957 4967 4969 4973 4987 4993 4999 5003
5009 5011 5021 5023 5039 5051 5059 5077 5081 5087
5099 5101 5107 5113 5119 5147 5153 5167 5171 5179
5189 5197 5209 5227 5231 5233 5237 5261 5273 5279
5281 5297 5303 5309 5323 5333 5347 5351 5381 5387
5393 5399 5407 5413 5417 5419 5431 5437 5441 5443
5449 5471 5477 5479 5483 5501 5503 5507 5519 5521
5527 5531 5557 5563 5569 5573 5581 5591 5623 5639
5641 5647 5651 5653 5657 5659 5669 5683 5689 5693
5701 5711 5717 5737 5741 5743 5749 5779 5783 5791
5801 5807 5813 5821 5827 5839 5843 5849 5851 5857
5861 5867 5869 5879 5881 5897 5903 5923 5927 5939
5953 5981 5987 6007 6011 6029 6037 6043 6047 6053
6067 6073 6079 6089 6091 6101 6113 6121 6131 6133
6143 6151 6163 6173 6197 6199 6203 6211 6217 6221
6229 6247 6257 6263 6269 6271 6277 6287 6299 6301
6311 6317 6323 6329 6337 6343 6353 6359 6361 6367
6373 6379 6389 6397 6421 6427 6449 6451 6469 6473
6481 6491 6521 6529 6547 6551 6553 6563 6569 6571
6577 6581 6599 6607 6619 6637 6653 6659 6661 6673
6679 6689 6691 6701 6703 6709 6719 6733 6737 6761
6763 6779 6781 6791 6793 6803 6823 6827 6829 6833
6841 6857 6863 6869 6871 6883 6899 6907 6911 6917
6947 6949 6959 6961 6967 6971 6977 6983 6991 6997
7001 7013 7019 7027 7039 7043 7057 7069 7079

Topics in a Linear Algebra Course


To learn more about a topic listed below, click the topic name to go to the corresponding MathWorld classroom page.

Eigenvalue: One of a set of special scalars associated with a linear system of equations that describes that system's fundamental modes. An eigenvector is associated with each eigenvalue.

Eigenvector: One of a special set of vectors associated with a linear system of equations. An eigenvalue is associated with each eigenvector.

Euclidean Space: The space of all n-tuples of real numbers. It is the generalization of the two-dimensional plane and three-dimensional space.

Inner Product: (1) In a vector space, a way to multiply vectors together, with the result of this multiplication being a scalar. (2) A synonym for dot product.

Linear Algebra: The study of linear systems of equations and their transformation properties.

Linear Transformation: A function from one vector space to another. If bases are chosen for the vector spaces, a linear transformation can be given by a matrix.

Matrix: A concise and useful way of uniquely representing and working with linear transformations. In particular, for every linear transformation, there exists exactly one corresponding matrix, and every matrix corresponds to a unique linear transformation. The matrix is an extremely important concept in linear algebra.

Matrix Inverse: Given a matrix M, the inverse is a new matrix M⁻¹ that, when multiplied by M, gives the identity matrix.

Matrix Multiplication: The process of multiplying two matrices (each of which represents a linear transformation), which forms a new matrix corresponding to the matrix representation of the two transformations' composition.

Norm: A quantity that describes the length, size, or extent of a mathematical object.

Vector Space: A set that is closed under finite vector addition and scalar multiplication. The basic example is n-dimensional Euclidean space.
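
As a quick illustration of several of the definitions above, here is a minimal Python/NumPy sketch (the 2x2 matrix and the vectors are arbitrary examples chosen for this illustration, not taken from the original text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # an arbitrary 2x2 symmetric matrix

# Eigenvalues and eigenvectors: A @ v = lam * v for each pair
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                     # eigenvalues 3 and 1

# Inner (dot) product of two vectors: the result is a scalar
u, v = np.array([1.0, 2.0]), np.array([3.0, 4.0])
print(np.dot(u, v))                    # 11.0

# Matrix inverse: multiplying A by its inverse gives the identity matrix
print(A @ np.linalg.inv(A))

# Norm: the Euclidean length of a vector
print(np.linalg.norm(u))               # sqrt(5), about 2.236
```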

List of linear algebra topics


From Wikipedia, the free encyclopedia

This is a list of linear algebra topics. See also:

- List of matrices
- Glossary of tensor theory.

Contents

1 Linear equations
2 Matrices
3 Matrix decompositions
4 Relations
5 Computations
6 Vector spaces
7 Structures
8 Multilinear algebra
9 Affine space and related topics
10 Projective space

Linear equations

- System of linear equations
- Determinant
- Minor
- Cauchy–Binet formula
- Cramer's rule
- Gaussian elimination
- Gauss–Jordan elimination
- Strassen algorithm

Matrices

- 2 × 2 real matrices
- Matrix theory
- Matrix addition
- Matrix multiplication
- Basis transformation matrix
- Characteristic polynomial
- Trace
- Eigenvalue, eigenvector and eigenspace
- Cayley–Hamilton theorem
- Spread of a matrix
- Jordan normal form
- Weyr canonical form
- Rank
- Matrix inversion, invertible matrix
- Pseudoinverse
- Adjugate
- Transpose
- Dot product
- Symmetric matrix
- Orthogonal matrix
- Skew-symmetric matrix
- Conjugate transpose
- Unitary matrix
- Hermitian matrix, Antihermitian matrix
- Positive-definite, positive-semidefinite matrix
- Pfaffian
- Projection
- Spectral theorem
- Perron–Frobenius theorem
- List of matrices
- Diagonal matrix, main diagonal
- Diagonalizable matrix
- Triangular matrix
- Tridiagonal matrix
- Block matrix
- Sparse matrix
- Hessenberg matrix
- Hessian matrix
- Vandermonde matrix
- Stochastic matrix
- Toeplitz matrix
- Circulant matrix
- Hankel matrix
- (0,1)-matrix

Matrix decompositions

- Cholesky decomposition
- LU decomposition
- QR decomposition
- Polar decomposition
- Spectral theorem
- Singular value decomposition
- Higher-order singular value decomposition
- Schur decomposition
- Schur complement
- Haynsworth inertia additivity formula
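
Most of the decompositions listed above are available in standard numerical libraries. The following is a minimal NumPy/SciPy sketch showing a few of them (the symmetric positive-definite matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])             # arbitrary symmetric positive-definite example

P, L, U = lu(A)                        # LU decomposition with permutation matrix P
Q, R = np.linalg.qr(A)                 # QR decomposition
C = np.linalg.cholesky(A)              # Cholesky factor: A = C @ C.T
Us, s, Vh = np.linalg.svd(A)           # singular value decomposition

for name, product in [("LU", P @ L @ U), ("QR", Q @ R),
                      ("Cholesky", C @ C.T), ("SVD", Us @ np.diag(s) @ Vh)]:
    print(name, np.allclose(product, A))   # each factorization reconstructs A
```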

System of linear equations


From Wikipedia, the free encyclopedia


A linear system in three variables determines a collection of planes. The intersection point is the solution.

In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same set of variables.[1] For example,

3x + 2y − z = 1
2x − 2y + 4z = −2
−x + (1/2)y − z = 0

is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by

x = 1, y = −2, z = −2,

since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.

In mathematics, the theory of linear systems is the basis and a fundamental part of linear
algebra, a subject which is used in most parts of modern mathematics.
Computational algorithms for finding the solutions are an important part of numerical linear
algebra, and play a prominent role in engineering, physics, chemistry, computer science,
and economics. A system of non-linear equations can often be approximated by a linear
system (see linearization), a helpful technique when making a mathematical model or computer
simulation of a relatively complex system.

Very often, the coefficients of the equations are real or complex numbers and the solutions are sought in the same set of numbers, but the theory and the algorithms apply to coefficients and solutions in any field. For solutions in an integral domain like the ring of integers, or in other algebraic structures, other theories have been developed; see Linear equation over a ring. Integer linear programming is a collection of methods for finding the "best" integer solution (when there are many). Gröbner basis theory provides algorithms when coefficients and unknowns are polynomials. Tropical geometry is another example of linear algebra in a more exotic structure.

Contents

1 Elementary example
2 General form
  2.1 Vector equation
  2.2 Matrix equation
3 Solution set
  3.1 Geometric interpretation
  3.2 General behavior
4 Properties
  4.1 Independence
  4.2 Consistency
  4.3 Equivalence
5 Solving a linear system
  5.1 Describing the solution
  5.2 Elimination of variables
  5.3 Row reduction
  5.4 Cramer's rule
  5.5 Matrix solution
  5.6 Other methods
6 Homogeneous systems
  6.1 Solution set
  6.2 Relation to nonhomogeneous systems
7 See also
8 Notes
9 References
  9.1 Textbooks

Elementary example
The simplest kind of linear system involves two equations and two variables:

2x + 3y = 6
4x + 9y = 15

One method for solving such a system is as follows. First, solve the top equation for x in terms of y:

x = 3 − (3/2)y

Now substitute this expression for x into the bottom equation:

4(3 − (3/2)y) + 9y = 15

This results in a single equation involving only the variable y. Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra).
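
A minimal Python check of this substitution method, assuming the two-equation example as reconstructed above (2x + 3y = 6 and 4x + 9y = 15):

```python
import numpy as np

# The two equations above: 2x + 3y = 6 and 4x + 9y = 15
A = np.array([[2.0, 3.0],
              [4.0, 9.0]])
b = np.array([6.0, 15.0])

# Substitution by hand: x = 3 - 1.5*y, so 4*(3 - 1.5*y) + 9*y = 15 gives y = 1
y = (15.0 - 4.0 * 3.0) / (9.0 - 4.0 * 1.5)
x = 3.0 - 1.5 * y
print(x, y)                       # 1.5 1.0

# The same answer from a general-purpose solver
print(np.linalg.solve(A, b))      # [1.5 1. ]
```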

General form
A general system of m linear equations with n unknowns can be written as

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm

Here x1, ..., xn are the unknowns, a11, ..., amn are the coefficients of the system, and b1, ..., bm are the constant terms.

Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.
Vector equation
One extremely helpful view is that each unknown is a weight for a column vector in a linear combination:

x1 a1 + x2 a2 + ... + xn an = b,

where a1, ..., an are the columns of coefficients and b is the column of constant terms.

This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed.
Matrix equation
The vector equation is equivalent to a matrix equation of the form

Ax = b

where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries.

The number of vectors in a basis for the span is now expressed as the rank of the matrix.
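
A short NumPy sketch of the matrix form Ax = b, using the coefficients of the introductory three-equation example as reconstructed above; the rank computation confirms that the solution is unique:

```python
import numpy as np

# A x = b for the introductory example (coefficients as reconstructed in this document)
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

print(np.linalg.matrix_rank(A))   # 3: full rank, so the solution is unique
print(np.linalg.solve(A, b))      # [ 1. -2. -2.]
```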
Solution set

The solution set for the equations x − y = −1 and 3x + y = 9 is the single point (2, 3).

A solution of a linear system is an assignment of values to the variables x1, x2, ..., xn such that each of the equations is satisfied. The set of all possible solutions is called the solution set.

A linear system may behave in any one of three possible ways:

1. The system has infinitely many solutions.
2. The system has a single unique solution.
3. The system has no solution.
Geometric interpretation
For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set.

For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these points.[2]

For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set
is the intersection of these hyperplanes, which may be a flat of any dimension.
General behavior

The solution set for two equations in three variables is usually a line.

In general, the behavior of a linear system is determined by the relationship between the number of
equations and the number of unknowns:

Usually, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system.

Usually, a system with the same number of equations and unknowns has a single unique solution.

Usually, a system with more equations than unknowns has no solution. Such a system is also known
as an overdetermined system.

In the first case, the dimension of the solution set is usually equal to n − m, where n is the number of
variables and m is the number of equations.

The following pictures illustrate this trichotomy in the case of two variables:

The first system has infinitely many solutions, namely all of the points on the blue line. The second
system has a single unique solution, namely the intersection of the two lines. The third system has no
solutions, since the three lines share no common point.

Keep in mind that the pictures above show only the most common case. It is possible for a system of
two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of
three equations and two unknowns to be solvable (if the three lines intersect at a single point). In
general, a system of linear equations may behave differently from expected if the equations
are linearly dependent, or if two or more of the equations are inconsistent.

Properties
Independence
The equations of a linear system are independent if none of the equations can be derived
algebraically from the others. When the equations are independent, each equation contains new
information about the variables, and removing any of the equations increases the size of the solution
set. For linear equations, logical independence is the same as linear independence.

The equations x − 2y = −1, 3x + 5y = 8, and 4x + 3y = 7 are linearly dependent.

For example, the equations

are not independent — they are the same equation when scaled by a factor of two, and they
would produce identical graphs. This is an example of equivalence in a system of linear
equations.

For a more complicated example, the equations

x − 2y = −1
3x + 5y = 8
4x + 3y = 7

are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.
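
One way to check such dependence numerically is to note that the rank of the system is smaller than the number of equations. A minimal NumPy sketch using the three equations above:

```python
import numpy as np

# Each row holds the coefficients and constant of one equation:
# x - 2y = -1, 3x + 5y = 8, 4x + 3y = 7
augmented = np.array([[1.0, -2.0, -1.0],
                      [3.0,  5.0,  8.0],
                      [4.0,  3.0,  7.0]])

print(np.linalg.matrix_rank(augmented))                        # 2, not 3: the equations are dependent
print(np.allclose(augmented[0] + augmented[1], augmented[2]))  # True: third row = sum of the first two
```
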
Consistency
See also: Consistent and inconsistent equations

The equations 3x + 2y = 6 and 3x + 2y = 12 are inconsistent.

A linear system is inconsistent if it has no solution, and otherwise it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, which can always be rewritten as the statement 0 = 1.

For example, the equations

3x + 2y = 6
3x + 2y = 12

are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines.
It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations

x + y = 1
2x + y = 1
3x + 2y = 3

are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Note that any two of these equations have a common solution. The same phenomenon can occur for any number of equations.

In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly
dependent, and the constant terms do not satisfy the dependence relation. A system of equations
whose left-hand sides are linearly independent is always consistent.

Putting it another way, according to the Rouché–Capelli theorem, any system of equations
(overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the
rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the
system must have at least one solution. The solution is unique if and only if the rank equals the number
of variables. Otherwise the general solution has k free parameters where k is the difference between
the number of variables and the rank; hence in such a case there is an infinitude of solutions. The
rank of a system of equations can never be higher than [the number of variables] + 1, which means
that a system with any number of equations can always be reduced to a system that has a number
of independent equations that is at most equal to [the number of variables] + 1.
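
The Rouché–Capelli test translates directly into code: compare the rank of the coefficient matrix with the rank of the augmented matrix. A minimal sketch, using the inconsistent pair 3x + 2y = 6 and 3x + 2y = 12 from above, plus one consistent system for contrast (the second system is an arbitrary example):

```python
import numpy as np

def classify(A, b):
    """Rouche-Capelli test: compare rank(A) with rank of the augmented matrix [A | b]."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_Ab > rank_A:
        return "inconsistent (no solution)"
    if rank_A == A.shape[1]:
        return "unique solution"
    return f"infinitely many solutions ({A.shape[1] - rank_A} free parameters)"

print(classify([[3, 2], [3, 2]], [6, 12]))   # inconsistent (no solution)
print(classify([[3, 2], [1, -1]], [6, 1]))   # unique solution
```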

Equivalence
Two linear systems using the same set of variables are equivalent if each of the equations in the
second system can be derived algebraically from the equations in the first system, and vice versa. Two
systems are equivalent if either both are inconsistent or each equation of each of them is a linear
combination of the equations of the other one. It follows that two linear systems are equivalent if and
only if they have the same solution set.

Solving a linear system

There are several algorithms for solving a system of linear equations.
Describing the solution
When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example x = 3, y = −2, z = 6. When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like (3, −2, 6) for the previous example.

It can be difficult to describe a set with infinite solutions. Typically, some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables.

For example, consider the following system:

x + 3y − 2z = 5
3x + 5y + 6z = 7

The solution set to this system can be described by the following equations:

x = −1 − 7z and y = 2 + 3z.

Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y.

Each free variable gives the solution space one degree of freedom, the number of which is equal to
the dimension of the solution set. For example, the solution set for the above equation is a line, since a
point in the solution set can be chosen by specifying the value of the parameter z. An infinite solution of
higher order may describe a plane, or higher-dimensional set.

Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:

y = 11/7 − (3/7)x and z = −1/7 − (1/7)x.

Here x is the free variable, and y and z are dependent.
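
A numerical sketch of the same idea, assuming the two-equation system as reconstructed above: one particular solution plus multiples of the null-space direction sweeps out the whole line of solutions:

```python
import numpy as np

# The two equations above (as reconstructed): x + 3y - 2z = 5 and 3x + 5y + 6z = 7
A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0]])
b = np.array([5.0, 7.0])

p = np.linalg.lstsq(A, b, rcond=None)[0]    # one particular solution
_, s, Vh = np.linalg.svd(A)
direction = Vh[-1]                          # spans the one-dimensional null space of A

for t in (0.0, 1.0, 2.0):                   # each value of the free parameter gives a solution
    x = p + t * direction
    print(np.allclose(A @ x, b))            # True for every t
```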


Elimination of variables
The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This
method can be described as follows:

1. In the first equation, solve for one of the variables in terms of the others.

2. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and one fewer unknown.

3. Continue until you have reduced the system to a single linear equation.

4. Solve this equation, and then back-substitute until the entire solution is found.

For example, consider the following system:

x + 3y − 2z = 5
3x + 5y + 6z = 7
2x + 4y + 3z = 8

Solving the first equation for x gives x = 5 + 2z − 3y, and plugging this into the second and third equation yields

−4y + 12z = −8
−2y + 7z = −2

Solving the first of these equations for y yields y = 2 + 3z, and plugging this into the second equation yields z = 2. We now have:

x = 5 + 2z − 3y
y = 2 + 3z
z = 2

Substituting z = 2 into the second equation gives y = 8, and substituting z = 2 and y = 8 into the first equation yields x = −15. Therefore, the solution set is the single point (x, y, z) = (−15, 8, 2).
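
The result of the hand elimination can be cross-checked with a library solver; a minimal sketch, assuming the system as reconstructed above:

```python
import numpy as np

# Coefficients and constants of the system as reconstructed above
A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])
b = np.array([5.0, 7.0, 8.0])

x = np.linalg.solve(A, b)
print(x)                          # [-15.   8.   2.]
print(np.allclose(A @ x, b))      # True: the hand elimination checks out
```
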
Row reduction
Main article: Gaussian elimination

In row reduction, the linear system is represented as an augmented matrix:

[ 1  3  −2 | 5 ]
[ 3  5   6 | 7 ]
[ 2  4   3 | 8 ]

This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:
Type 1: Swap the positions of two rows.
Type 2: Multiply a row by a nonzero scalar.
Type 3: Add to one row a scalar multiple of another.

Because these operations are reversible, the augmented matrix produced always represents a linear
system that is equivalent to the original.
There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above:
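
The worked matrices can be reproduced with a short Gauss–Jordan routine; the following Python sketch (the augmented matrix is the one reconstructed above) reduces it to reduced row echelon form:

```python
import numpy as np

def rref(M):
    """Reduce M to reduced row echelon form using the three elementary row operations."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # pick the largest entry in this column as the pivot (partial pivoting)
        pivot = np.argmax(np.abs(A[pivot_row:, col])) + pivot_row
        if np.isclose(A[pivot, col], 0.0):
            continue
        A[[pivot_row, pivot]] = A[[pivot, pivot_row]]   # Type 1: swap two rows
        A[pivot_row] /= A[pivot_row, col]               # Type 2: scale a row
        for r in range(rows):                           # Type 3: add a multiple of a row
            if r != pivot_row:
                A[r] -= A[r, col] * A[pivot_row]
        pivot_row += 1
    return A

# Augmented matrix of the system as reconstructed above
augmented = np.array([[1, 3, -2, 5],
                      [3, 5, 6, 7],
                      [2, 4, 3, 8]])
print(rref(augmented))   # identity block with the solution column [-15, 8, 2]
```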

The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.

Cramer's rule
Main article: Cramer's rule

Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system

x + 3y − 2z = 5
3x + 5y + 6z = 7
2x + 4y + 3z = 8

is given by

x = det(Ax) / det(A),   y = det(Ay) / det(A),   z = det(Az) / det(A),

where A is the coefficient matrix and Ax, Ay, Az are obtained from A by replacing its first, second, and third column, respectively, with the column of constant terms (5, 7, 8).

For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.

Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.
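
Cramer's rule is easy to express with a determinant routine; a minimal NumPy sketch, applied to the running example system as reconstructed above:

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule: each unknown is a quotient of two determinants (small systems only)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Cramer's rule needs a nonzero determinant")
    x = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i with the constant terms
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[1, 3, -2], [3, 5, 6], [2, 4, 3]])   # running example, as reconstructed above
b = np.array([5, 7, 8])
print(cramer_solve(A, b))                           # approximately [-15.   8.   2.]
```
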
Matrix solution
If the equation system is expressed in the matrix form Ax = b, the entire solution set can also be expressed in matrix form. If the matrix A is square (has as many rows as columns) and has full rank (all rows are independent), then the system has a unique solution given by

x = A⁻¹b
where A⁻¹ is the inverse of A. More generally, regardless of whether A is square or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose pseudoinverse of A, denoted A⁺, as follows:

x = A⁺b + (I − A⁺A)w

where w is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 satisfies Ax = b, that is, AA⁺b = b; if this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A⁺ simply equals A⁻¹ and the general solution equation simplifies to x = A⁻¹b as previously stated, since the term (I − A⁺A)w drops out completely, leaving only a single solution. In other cases, though, the term remains, and the infinitude of potential values of the free parameter vector w gives an infinitude of solutions of the equation.
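
A minimal NumPy sketch of the pseudoinverse approach (the 2x3 matrix here is an arbitrary underdetermined example, not taken from the original article):

```python
import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0]])        # arbitrary 2x3 (underdetermined) example
b = np.array([5.0, 7.0])

A_pinv = np.linalg.pinv(A)              # Moore-Penrose pseudoinverse A+
x0 = A_pinv @ b                         # the particular solution obtained with w = 0

# Consistency test: A A+ b must reproduce b, otherwise the system has no solution
print(np.allclose(A @ A_pinv @ b, b))   # True here

# General solution x = A+ b + (I - A+ A) w for an arbitrary parameter vector w
w = np.array([1.0, -2.0, 0.5])
x = x0 + (np.eye(3) - A_pinv @ A) @ w
print(np.allclose(A @ x, b))            # every such x solves the system
```
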
Other methods
While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b.

If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications.

A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all) and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods.
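
A minimal sketch of one such iterative method, Jacobi iteration, on a small diagonally dominant system chosen so that the iteration converges (the system is an arbitrary example):

```python
import numpy as np

def jacobi(A, b, iterations=50):
    """Jacobi iteration: repeatedly refine an approximation to the solution of A x = b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)                  # diagonal entries of A
    R = A - np.diagflat(D)          # off-diagonal part of A
    x = np.zeros_like(b)            # crude initial approximation
    for _ in range(iterations):
        x = (b - R @ x) / D         # one refinement step
    return x

# A small diagonally dominant system (so that the iteration converges)
A = np.array([[10.0, 2.0, 1.0],
              [ 1.0, 8.0, 2.0],
              [ 2.0, 1.0, 9.0]])
b = np.array([13.0, 11.0, 12.0])
print(jacobi(A, b))                 # close to the exact solution [1, 1, 1]
```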

Homogeneous systems
See also: Homogeneous differential equation

A system of linear equations is homogeneous if all of the constant terms are zero:

a11 x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + a22 x2 + ... + a2n xn = 0
...
am1 x1 + am2 x2 + ... + amn xn = 0

A homogeneous system is equivalent to a matrix equation of the form

Ax = 0

where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.

Solution set
Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties:

1. If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system.
2. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system.

These are exactly the properties required for the solution set to be a linear subspace of Rⁿ. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. Numerical solutions to a homogeneous system can be found with an SVD.

Relation to nonhomogeneous systems
There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system, Ax = b and Ax = 0 respectively. Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as

{ p + v : v is any solution to Ax = 0 }.

Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0: the flat for the first system is obtained by translating the linear subspace for the homogeneous system by the vector p.

This reasoning only applies if the system Ax = b has at least one solution, which occurs if and only if the vector b lies in the column space of A.
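
A minimal NumPy sketch of these ideas: the null space of a (here arbitrary, 2x3) matrix A is computed from an SVD, and solutions of a consistent nonhomogeneous system are written as a particular solution plus null-space vectors:

```python
import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0]])           # arbitrary 2x3 coefficient matrix

# Null space of A (solution set of A x = 0) from the SVD
U, s, Vh = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
null_space = Vh[np.sum(s > tol):].T        # columns span the homogeneous solution set
print(np.allclose(A @ null_space, 0))      # True: every column solves A x = 0

# Nonhomogeneous system A x = b: particular solution p plus any null-space vector
b = np.array([5.0, 7.0])
p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
x = p + null_space @ np.array([2.5])       # p translated by a null-space vector
print(np.allclose(A @ x, b))               # True
```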
