
A line in the xy-plane can be represented algebraically by an equation of the form $a_1x + b_1y = c$. An equation of this kind is called a linear equation in the variables x and y.

More generally, we define a linear equation in the n variables $x_1, \ldots, x_n$ to be one that can be expressed in the form

$$a_1x_1 + a_2x_2 + \cdots + a_nx_n = b \tag{1.1}$$

where $a_1, a_2, \ldots, a_n$ and $b$ are real constants. For example, $x + 3y = 7$ and $x_1 - 2x_2 + 3x_3 = 1$ are linear equations, whereas $xy = 2$ and $\sin x + y = 0$ are not.

Definition. A finite set of linear equations in the variables $x_1, x_2, \ldots, x_n$ is called a system of linear equations. Not all systems of linear equations have solutions. A system of equations that has no solution is said to be inconsistent. If there is at least one solution, it is called consistent.

To illustrate the possibilities that can occur in solving systems of linear equations, consider a general system of two linear equations in the unknowns x and y:
$$a_1x + b_1y = c_1$$
$$a_2x + b_2y = c_2$$

The graphs of these equations are lines; call them $l_1$ and $l_2$. Since a point (x, y) lies on a line if and only if the numbers x and y satisfy the equation of the line, the solutions of the system of equations will correspond to points of intersection of $l_1$ and $l_2$.

[Figure 1.1. (a) no solution, (b) one solution, (c) infinitely many solutions]

The three possibilities illustrated in Figure 1.1 are as follows:

(a) $l_1$ and $l_2$ are parallel, in which case there is no intersection, and consequently no solution to the system.

(b) $l_1$ and $l_2$ intersect at only one point, in which case the system has exactly one solution.

(c) $l_1$ and $l_2$ coincide, in which case there are infinitely many points of intersection, and consequently infinitely many solutions to the system.

Although we have considered only two equations with two unknowns here, we will show later that this same result holds for arbitrary systems; that is, every system of linear equations has either no solutions, exactly one solution, or infinitely many solutions.
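To make the trichotomy concrete, here is a minimal Python sketch that classifies a system of two lines by comparing slopes and checking whether one equation is a multiple of the other. The function name classify_two_lines is ours, and the sketch assumes exact (integer or Fraction) coefficients rather than floating point.

```python
def classify_two_lines(a1, b1, c1, a2, b2, c2):
    """Classify the system  a1*x + b1*y = c1,  a2*x + b2*y = c2
    by the geometry of the lines l1 and l2 in Figure 1.1.
    Assumes exact (integer or Fraction) coefficients; floating-point
    input would need comparison tolerances."""
    det = a1 * b2 - a2 * b1
    if det != 0:
        # The lines have different slopes, so they cross exactly once.
        return "one solution: l1 and l2 intersect at a single point"
    # Equal slopes: the lines are parallel or identical. They coincide
    # exactly when one equation is a scalar multiple of the other.
    if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
        return "infinitely many solutions: l1 and l2 coincide"
    return "no solution: l1 and l2 are parallel"

# The three cases of Figure 1.1:
print(classify_two_lines(1, 1, 1, 1, 1, 2))   # (a) parallel lines
print(classify_two_lines(1, 1, 2, 1, -1, 0))  # (b) a single intersection
print(classify_two_lines(1, 1, 1, 2, 2, 2))   # (c) the same line twice
```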
1.2 GAUSSIAN ELIMINATION

In this section we give a systematic procedure for solving systems of linear equations; it is based on the idea of reducing the augmented matrix to a form that is simple enough so that the system of equations can be solved by inspection.

Remark. It is not difficult to see that a matrix in row-echelon form must have zeros below each leading 1. In contrast, a matrix in reduced row-echelon form must have zeros above and below each leading 1.
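Since the text has not yet spelled out the reduction procedure in detail, the following is only a rough Python sketch of Gauss–Jordan reduction to reduced row-echelon form, producing zeros above and below each leading 1. The function name to_rref is ours, the sketch assumes NumPy, and it omits the numerical safeguards (such as pivoting strategies) a production routine would use.

```python
import numpy as np

def to_rref(M):
    """Reduce a matrix to reduced row-echelon form with elementary row
    operations, clearing entries below AND above each leading 1."""
    M = np.asarray(M, dtype=float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        # Look for a usable pivot in this column, at or below pivot_row.
        below = np.nonzero(np.abs(M[pivot_row:, col]) > 1e-12)[0]
        if below.size == 0:
            continue                               # no pivot in this column
        r = pivot_row + below[0]
        M[[pivot_row, r]] = M[[r, pivot_row]]      # interchange two rows
        M[pivot_row] /= M[pivot_row, col]          # scale to get a leading 1
        for i in range(rows):                      # clear the rest of the column
            if i != pivot_row:
                M[i] -= M[i, col] * M[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# Augmented matrix of: x + y + 2z = 9, 2x + 4y - 3z = 1, 3x + 6y - 5z = 0
print(to_rref([[1, 1, 2, 9], [2, 4, -3, 1], [3, 6, -5, 0]]))
# The last column of the result reads off the solution x = 1, y = 2, z = 3.
```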
As a direct result of Figure 1.1 we have the following important theorem.

Theorem 1.2.1. A homogeneous system of linear equations with more unknowns than equations always has infinitely many solutions.
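As a quick illustration of the theorem, the to_rref sketch above can be applied to a homogeneous system with more unknowns than equations; the particular system here is our own example.

```python
# A homogeneous system of two equations in three unknowns:
#   x + y +  z = 0
#   x - y + 2z = 0
# More unknowns (3) than equations (2), so by Theorem 1.2.1 there are
# infinitely many solutions.
print(to_rref([[1, 1, 1, 0], [1, -1, 2, 0]]))
# [[ 1.   0.   1.5  0. ]
#  [ 0.   1.  -0.5  0. ]]  ->  z is free: x = -1.5z, y = 0.5z
```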
The definition of matrix multiplication requires that the number of columns of the first factor A be the same as the number of rows of the second factor B in order to form the product AB. If this condition is not satisfied, the product is undefined. A convenient way to determine whether a product of two matrices is defined is to write down the size of the first factor and, to the right of it, write down the size of the second factor. If, as in Figure 1.2, the inside numbers are the same, then the product is defined. The outside numbers then give the size of the product.

[Figure 1.2. Inside and outside numbers of a matrix multiplication problem $A \times B = AB$: $A$ is $m \times r$ and $B$ is $r \times n$; the inside dimensions ($r$ and $r$) are dropped, and the outside dimensions give the size $m \times n$ of the product $AB$.]
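The same bookkeeping can be checked numerically; a small sketch, assuming NumPy (the all-ones matrices are arbitrary placeholders of ours):

```python
import numpy as np

A = np.ones((2, 3))       # first factor:  size m x r with m = 2, r = 3
B = np.ones((3, 4))       # second factor: size r x n with r = 3, n = 4
print((A @ B).shape)      # (2, 4): the outside numbers give the size of AB

C = np.ones((4, 4))
try:
    A @ C                 # inside numbers 3 and 4 differ: AC is undefined
except ValueError as err:
    print("product undefined:", err)
```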
Although the commutative law for multiplication is not valid in matrix arithmetic, many familiar laws of arithmetic are valid for matrices. Some of the most important ones and their names are summarized in the following proposition.

Proposition 1.2.2. Assuming that the sizes of the matrices are such that the indicated operations can be performed, the following rules of matrix arithmetic are valid.

(a) A + B = B + A (Commutative law for addition)
(b) A + (B + C) = (A + B) + C (Associative law for addition)
(c) A(BC) = (AB)C (Associative law for multiplication)
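A small numerical check of these rules, and of the failure of commutativity for multiplication; the matrices are arbitrary examples of ours, assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

print(np.array_equal(A @ B, B @ A))              # False: AB != BA in general
print(np.array_equal(A + B, B + A))              # True:  rule (a)
print(np.array_equal(A + (B + C), (A + B) + C))  # True:  rule (b)
print(np.array_equal(A @ (B @ C), (A @ B) @ C))  # True:  rule (c)
```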
1.3 FURTHER RESULTS ON SYSTEMS OF EQUATIONS

In this section we shall establish more results about systems of linear equations and invertibility of matrices. Our work will lead to a method for solving n equations in n unknowns that is more efficient than Gaussian elimination for certain kinds of problems.
