MODULE-1 (1)
LEARNING OBJECTIVES
At the end of this module, you are expected to:
1. Compute the Taylor series of a function.
2. Solve systems of linear equations using matrix analysis.
3. Discuss and solve problems using Taylor series and matrix analysis.
4. Use theorems of matrix analysis to solve problems in civil engineering.
KEY TERMS
Adjoint Matrix
Cofactors
Cramer’s Rule
Determinant
Eigenvalue
Inverse Matrix
Taylor Series
Differentiation:
Integration:
Note how there is now a ‘dt’ at the end instead of a ‘dx’. In general, you write ‘d_’ and fill in
whatever variable in the function is changing – in this case ‘t’; in previous cases it was ‘x’. So by integrating
we can change the velocity function into a position function. Let’s work out what the position
function is:
Integrating we get:
We now have an expression for the truck’s position after ‘t’ seconds. So now we work out
what this expression equals for 10 seconds (the top number):
Then we work out what this expression equals for 0 seconds (the bottom number):
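The worked equations for this example did not survive in this copy, so the following sketch uses a hypothetical velocity function, v(t) = 3t^2 m/s (an assumption for illustration, not the module’s original function), to show the same integrate-then-evaluate steps:

```python
# Hypothetical illustration: the module's original velocity function was lost,
# so assume v(t) = 3*t**2 (m/s). Integrating v over [0, 10] seconds gives the
# change in position: evaluate the antiderivative at the top number (10), then
# at the bottom number (0), and subtract.
def v(t):
    return 3 * t ** 2  # assumed velocity function, m/s

def displacement(v, t0, t1, steps=100_000):
    # Midpoint-rule approximation of the integral of v(t) dt from t0 to t1.
    h = (t1 - t0) / steps
    return h * sum(v(t0 + (i + 0.5) * h) for i in range(steps))

# The antiderivative of 3t^2 is t^3, so the exact answer is 10^3 - 0^3 = 1000 m.
print(displacement(v, 0, 10))
```

The numerical result agrees with the exact evaluation t^3 at the two limits, which is the point of the worked example.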
Taylor’s Theorem: if f has n + 1 continuous derivatives on an interval containing x and x0, then

f(x) = p_n(x) + R_n(x)   (equation 1.2.1)

for

p_n(x) = Σ_{k=0}^{n} ((x - x0)^k / k!) f^(k)(x0)   (equation 1.2.2)

and

R_n(x) = (1/n!) ∫_{x0}^{x} (x - t)^n f^(n+1)(t) dt.   (equation 1.2.3)

Moreover, there exists a point ε_x between x and x0 such that

R_n(x) = ((x - x0)^(n+1) / (n+1)!) f^(n+1)(ε_x).   (equation 1.2.4)

The point x0 is usually chosen at the discretion of the user, and is often taken to be 0. Note
that the two forms of the remainder are equivalent: the "pointwise" form (1.2.4) can be derived from
the "integral" form (1.2.3).
Taylor's Theorem is important because it allows us to represent, exactly, fairly general
functions in terms of polynomials with a known, specified, boundable error. This allows us to
replace, in a computational setting, these same general functions with something that is much
simpler—a polynomial—yet at the same time we are able to bound the error that is made. No other
tool will be as important to us as Taylor's Theorem, so it is worth spending some time on it here at
the outset.
The usual calculus treatment of Taylor's Theorem should leave the student familiar with
three particular expansions (for all three of these we have used x0 = 0, which means we really
should call them Maclaurin series, but we won't):

e^x = Σ_{k=0}^{n} x^k / k! + R_n(x)
sin x = Σ_{k=0}^{n} ((-1)^k / (2k+1)!) x^(2k+1) + R_n(x)
cos x = Σ_{k=0}^{n} ((-1)^k / (2k)!) x^(2k) + R_n(x)

(Strictly speaking, the indices on the last two remainders should be 2n + 1 and 2n, because
those are the exponents in the last terms of the expansion, but it is commonplace to present them
as we did here.) In fact, Taylor's Theorem provides us with our first and simplest example of an
approximation and an error estimate. Consider the problem of approximating the exponential
function on the interval [-1, 1]. Taylor's Theorem tells us that we can represent e^x using a
polynomial with a (known) remainder:

e^x = Σ_{k=0}^{n} x^k / k! + (x^(n+1) / (n+1)!) e^(c_x),

where c_x is an unknown point between x and 0. Since we want to consider the most general
case, where x can be any point in [-1, 1], we have to consider that c_x can be any point in [-1, 1],
as well. For simplicity, let's denote the polynomial by p_n(x), and the remainder by R_n(x), so that the
equation above becomes

e^x = p_n(x) + R_n(x).
Suppose that we want this approximation to be accurate to within 10^(-6) in absolute error,
i.e., we want

|e^x - p_n(x)| ≤ 10^(-6)

for all x in the interval [-1, 1]. Note that if we can make |R_n(x)| ≤ 10^(-6) for all x ∈ [-1, 1], then
we will have

|e^x - p_n(x)| = |R_n(x)| ≤ 10^(-6),

so that the error in the approximation will be less than 10^(-6). The best way to proceed is to create
a simple upper bound for |R_n(x)|, and then use that to determine the number of terms necessary to
make this upper bound less than 10^(-6). For x ∈ [-1, 1] we have

|R_n(x)| = (|x|^(n+1) / (n+1)!) e^(c_x) ≤ e / (n+1)!,

and we will know that the error is less than the desired tolerance, for all values of x of interest to
us, i.e., all x ∈ [-1, 1], once e / (n+1)! ≤ 10^(-6). A little bit of exercise with a calculator shows us that we need to use n = 9
to get the desired accuracy. Figure 1.1 shows a plot of the exponential function e^x, the
approximation p9(x), as well as the less accurate p2(x); since it is impossible to distinguish by eye
between the plots for e^x and p9(x), we have also provided Figure 1.2, which is a plot of the error
e^x - p9(x); note that we begin to lose accuracy as we get away from the interval [-1, 1]. This is not
surprising. Since the Taylor polynomial is constructed to match f and its first n derivatives at x = x0,
it ought to be the case that p_n is a good approximation to f only when x is near x0.
Here we are in the first section of the text, and we have already constructed our first
approximation and proved our first theoretical result. Theoretical result? Where? Why, the error
estimate, of course. The material in the previous several lines is a proof of Proposition 1.1.
Proposition 1.1 Let p9(x) be the Taylor polynomial of degree 9 for the exponential function. Then,
for all x ∈ [-1, 1], the error in the approximation of p9(x) to e^x is less than 10^(-6), i.e.,

|e^x - p9(x)| ≤ 10^(-6).
Although this result is not of major significance to our work (the exponential function is
approximated more efficiently by different means), it does illustrate one of the most important
aspects of the subject. The result tells us, ahead of time, that we can approximate the exponential
function to within 10^(-6) accuracy using a specific polynomial, and this accuracy holds for all x in a
specified interval. That is the kind of thing we will be doing throughout the text: constructing
approximations to difficult computations that are accurate in some sense, and we know how
accurate they are.
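As a check on the argument above, both the remainder bound e / (n+1)! and the actual error of p9(x) can be evaluated numerically (an illustrative sketch, added here, not part of the original module):

```python
import math

def taylor_exp(x, n):
    # Degree-n Taylor polynomial of e^x about x0 = 0.
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

n = 9
# Upper bound on |R_n(x)| for x in [-1, 1]: e / (n + 1)!
bound = math.e / math.factorial(n + 1)
# Largest observed error on a grid of points covering [-1, 1].
worst = max(abs(math.exp(x) - taylor_exp(x, n))
            for x in (i / 100 - 1 for i in range(201)))
print(bound < 1e-6, worst < 1e-6)  # both True for n = 9
```

Repeating the run with n = 8 makes the bound exceed 10^(-6), which is why n = 9 is the smallest degree that works.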
Figure 1.1 Taylor approximation: e^x (solid line), p9(x) ≈ e^x (circles), and p2(x) ≈ e^x (dashed line). Note that
e^x and p9(x) are indistinguishable on this plot.
EXAMPLE 2: Let f(x) = √(x + 1); then the second-order Taylor polynomial (computed
about x0 = 0) is computed as follows:

f(x0) = f(0) = 1
f'(x) = (1/2)(x + 1)^(-1/2),   f'(x0) = 1/2
f''(x) = (-1/2)(1/2)(x + 1)^(-3/2),   f''(x0) = -1/4

p2(x) = f(x0) + (x - x0) f'(x0) + (1/2)(x - x0)^2 f''(x0) = 1 + (1/2) x - (1/8) x^2

The error in using p2 to approximate √(x + 1) is given by R2(x) = (1/3!) (x - x0)^3 f'''(ε_x),
where ε_x is between x and x0. Since f'''(x) = (3/8)(x + 1)^(-5/2), we can simplify the error as follows:

R2(x) = (1/6) x^3 (3/8)(1 + ε_x)^(-5/2) = (1/16) x^3 (1 + ε_x)^(-5/2).

If we want to consider this for all x ∈ [0, 1], then we observe that x ∈ [0, 1] and ε_x between x and
0 imply that ε_x ∈ [0, 1]; therefore

(1 + ε_x)^(-5/2) ≤ 1 and |x^3| ≤ 1,

so that the upper bound on the error becomes |R2(x)| ≤ 1/16 = 0.0625, for all x ∈ [0, 1]. If we are
only interested in x ∈ [0, 1/2], then the error is much smaller:

|R2(x)| ≤ (1/16)(1/2)^3 = 1/128 ≈ 0.0078.
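The 1/16 bound on [0, 1] is easy to confirm numerically (an illustrative sketch, not from the original module):

```python
import math

def p2(x):
    # Second-order Taylor polynomial of sqrt(x + 1) about x0 = 0.
    return 1 + 0.5 * x - 0.125 * x ** 2

# Largest observed error on [0, 1]; the derivation above bounds it by 1/16.
worst = max(abs(math.sqrt(x + 1) - p2(x)) for x in (i / 1000 for i in range(1001)))
print(worst, worst <= 1 / 16)
```

The observed worst error (at x = 1) is about 0.039, comfortably inside the 0.0625 bound; a theoretical bound is allowed to be pessimistic.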
EXAMPLE: Find the Taylor Series for f(x) = e^(-x) about x = 0.
Solution 1
As with the first example we’ll need to get a formula for f^(k)(0). However, unlike the first one
we’ve got a little more work to do. Let’s first take some derivatives and evaluate them at x = 0.

f^(0)(x) = e^(-x),   f^(0)(0) = 1
f^(1)(x) = -e^(-x),   f^(1)(0) = -1
f^(2)(x) = e^(-x),   f^(2)(0) = 1
f^(3)(x) = -e^(-x),   f^(3)(0) = -1
⋮
f^(k)(x) = (-1)^k e^(-x),   f^(k)(0) = (-1)^k,   k = 0, 1, 2, 3, ...

Therefore,

e^(-x) = Σ_{k=0}^{∞} ((-1)^k / k!) x^k
Solution 2
The previous solution wasn’t too bad and we often have to do things in that manner.
However, in this case there is a much shorter solution method. In the previous section we used
series that we’ve already found to help us find a new series. Let’s do the same thing with this one.
We already know a Taylor Series for ex about x=0 and in this case the only difference is we’ve got
a “-x” in the exponent instead of just an x.
So, all we need to do is replace the x in the Taylor Series that we found in the second
example with “-x”:

e^(-x) = Σ_{k=0}^{∞} (-x)^k / k! = Σ_{k=0}^{∞} ((-1)^k / k!) x^k
This is a much shorter method of arriving at the same answer so don’t forget about using
previously computed series where possible (and allowed of course).
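Partial sums of the series we just found do converge to e^(-x); a quick numerical check (an illustrative sketch):

```python
import math

def exp_neg(x, n):
    # Partial sum of the series found above: sum_{k=0}^{n} (-1)^k x^k / k!
    return sum((-1) ** k * x ** k / math.factorial(k) for k in range(n + 1))

for x in (0.0, 0.5, 2.0):
    print(x, exp_neg(x, 30), math.exp(-x))
```

With 30 terms the partial sum matches the built-in exponential to machine precision at these points.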
EXAMPLE 5: Find the Taylor Series for f(x) = x^4 e^(-3x^2) about x = 0.
Solution:
For this example, we will take advantage of the fact that we already have a Taylor Series
for e^x about x = 0. In this example, unlike the previous example, computing the series directly would
be significantly longer and more difficult.

x^4 e^(-3x^2) = x^4 Σ_{k=0}^{∞} (-3x^2)^k / k!
             = x^4 Σ_{k=0}^{∞} ((-3)^k (x^2)^k) / k!
             = Σ_{k=0}^{∞} ((-3)^k x^(2k+4)) / k!
EXAMPLE 6: Find the Taylor Series for f(x) = e^(-x) about x = -4.
Solution:
The derivatives are the same as before, f^(k)(x) = (-1)^k e^(-x), so evaluating at x = -4 gives
f^(k)(-4) = (-1)^k e^4 and

e^(-x) = Σ_{k=0}^{∞} ((-1)^k e^4 / k!) (x + 4)^k

EXAMPLE: Find the Taylor Series for f(x) = cos x about x = 0.
Solution:
Let’s plug what we’ve got into the Taylor series and see what we get:

cos x = Σ_{k=0}^{n} (f^(k)(0) / k!) x^k
      = f(0) + f'(0) x + (f''(0)/2!) x^2 + (f'''(0)/3!) x^3 + (f^(4)(0)/4!) x^4 + (f^(5)(0)/5!) x^5 + ...
      = 1 + 0 - (1/2!) x^2 + 0 + (1/4!) x^4 + 0 - (1/6!) x^6 + ...
       (k = 0) (k = 1) (k = 2) (k = 3) (k = 4) (k = 5) (k = 6)
We only pick up terms with even powers on the x’s. This doesn’t really help us to get a general
formula for the Taylor Series. However, let’s drop the zeroes and “renumber” the terms as follows
to see what we can get.
cos x = 1 - (1/2!) x^2 + (1/4!) x^4 - (1/6!) x^6 + ...
       (k = 0)  (k = 1)    (k = 2)    (k = 3)
By renumbering the terms as we did we can actually come up with a general formula for the Taylor
Series and here it is,
cos x = Σ_{k=0}^{∞} ((-1)^k / (2k)!) x^(2k)

Similarly, the Taylor Series for sin x about x = 0 is

sin x = Σ_{k=0}^{∞} ((-1)^k / (2k+1)!) x^(2k+1)
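Partial sums of these two series can be checked against the built-in cosine and sine (an illustrative sketch, not from the original module):

```python
import math

def cos_series(x, n):
    # Partial sum of sum_{k=0}^{n} (-1)^k x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n + 1))

def sin_series(x, n):
    # Partial sum of sum_{k=0}^{n} (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n + 1))

print(cos_series(1.0, 20), math.cos(1.0))
print(sin_series(1.0, 20), math.sin(1.0))
```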
For f(x) = ln x about x = 2, the derivatives and evaluations follow the pattern

f^(k)(x) = (-1)^(k+1) (k-1)! / x^k,   f^(k)(2) = (-1)^(k+1) (k-1)! / 2^k,   k = 1, 2, 3, ...
Note that while we got a general formula here it doesn’t work for k = 0. This will happen on occasion
so don’t worry about it when it does. In order to plug this into the Taylor Series formula we’ll need
to strip out the k = 0 term first.
P_n(x) = Σ_{k=0}^{n} f^(k)(x0) (x - x0)^k / k!,   with n → ∞

ln x = Σ_{k=0}^{∞} (f^(k)(2) / k!) (x - 2)^k
     = f(2) + Σ_{k=1}^{∞} (f^(k)(2) / k!) (x - 2)^k
     = ln 2 + Σ_{k=1}^{∞} ((-1)^(k+1) (k-1)! / (k! 2^k)) (x - 2)^k
     = ln 2 + Σ_{k=1}^{∞} ((-1)^(k+1) / (k 2^k)) (x - 2)^k
Notice that we simplified the factorials in this case. You should always simplify them if there is
more than one and it’s possible to simplify them.
Also, do not get excited about the term sitting in front of the series. Sometimes we need to do that
when we can’t get a general formula that will hold for all values of k.
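The final series can be checked numerically; note that it converges only for |x - 2| < 2 (an illustrative sketch):

```python
import math

def ln_series(x, n):
    # Partial sum of ln 2 + sum_{k=1}^{n} (-1)^(k+1) / (k * 2^k) * (x - 2)^k
    return math.log(2) + sum((-1) ** (k + 1) / (k * 2 ** k) * (x - 2) ** k
                             for k in range(1, n + 1))

# The series converges for |x - 2| < 2 (and at x = 4).
for x in (1.5, 2.0, 3.0):
    print(x, ln_series(x, 60), math.log(x))
```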
EXAMPLE 10: Find the Taylor Series for f(x) = 1/x^2 about x = -1.
Solution:
Again, here are the derivatives and evaluations.

f^(0)(x) = 1/x^2,   f^(0)(-1) = 1/(-1)^2 = 1
f^(1)(x) = -2/x^3,   f^(1)(-1) = -2/(-1)^3 = 2
f^(2)(x) = 2(3)/x^4,   f^(2)(-1) = 2(3)/(-1)^4 = 6
f^(3)(x) = -2(3)(4)/x^5,   f^(3)(-1) = -2(3)(4)/(-1)^5 = 24
⋮
f^(k)(x) = (-1)^k (k+1)! / x^(k+2),   f^(k)(-1) = (-1)^k (k+1)! / (-1)^(k+2) = (k+1)!

Notice that all the negative signs will cancel out in the evaluation. Also, this formula will work for
all k, unlike the previous example. The Taylor Series is therefore

1/x^2 = Σ_{k=0}^{∞} ((k+1)! / k!) (x + 1)^k = Σ_{k=0}^{∞} (k+1) (x + 1)^k
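Plugging f^(k)(-1) = (k+1)! into the Taylor formula gives 1/x^2 = Σ_{k=0}^{∞} (k+1)(x+1)^k, which converges for |x + 1| < 1; a quick numerical check (an illustrative sketch):

```python
def inv_sq_series(x, n):
    # Partial sum of sum_{k=0}^{n} (k + 1) * (x + 1)^k.
    return sum((k + 1) * (x + 1) ** k for k in range(n + 1))

# Convergence requires |x + 1| < 1, i.e. x between -2 and 0.
for x in (-1.0, -0.5, -1.4):
    print(x, inv_sq_series(x, 200), 1 / x ** 2)
```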
EXAMPLE 11: Find the Taylor Series for f(x) = x3 – 10x2 + 6 about x = 3.
Solution:
Here are the derivatives for this problem.
f^(0)(x) = x^3 - 10x^2 + 6,   f^(0)(3) = -57
f^(1)(x) = 3x^2 - 20x,   f^(1)(3) = -33
f^(2)(x) = 6x - 20,   f^(2)(3) = -2
f^(3)(x) = 6,   f^(3)(3) = 6
f^(4)(x) = 0,   f^(4)(3) = 0
f^(k)(x) = 0 when k ≥ 4
This Taylor series will terminate after k = 3. This will always happen when we are finding the
Taylor Series of a polynomial. Here is the Taylor Series for this one.
x^3 - 10x^2 + 6 = Σ_{k=0}^{∞} (f^(k)(3) / k!) (x - 3)^k
               = f(3) + f'(3)(x - 3) + (f''(3)/2!) (x - 3)^2 + (f'''(3)/3!) (x - 3)^3 + 0
               = -57 - 33(x - 3) - (x - 3)^2 + (x - 3)^3
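Since the expansion terminates, it must agree with the original polynomial exactly, at every x; a quick check (an illustrative sketch):

```python
def f(x):
    return x ** 3 - 10 * x ** 2 + 6

def taylor_f(x):
    # The terminating Taylor expansion about x = 3 found above.
    return -57 - 33 * (x - 3) - (x - 3) ** 2 + (x - 3) ** 3

# A degree-3 polynomial equals its degree-3 Taylor expansion everywhere.
for x in (-2.0, 0.0, 3.0, 7.5):
    print(x, f(x), taylor_f(x))
```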
INSTRUCTIONS: Find the Taylor Series for each of the following functions. Write your answer and
solution in your notebook. Once you finish answering, submit a picture of your output via Google
Classroom. Please keep your work clean and organized.
1) f(x) = sin(x) about x = 3π/2
2) f(x) = ln(1 - x) about x = -2
3) f(x) = x^3 + 9x^2 - 10x + 2 about x = 3
4) f(x) = √(2 + x) about x = 1
5) f(x) = e^(1 - 8x) about x = 3
1.3.1. Definitions:
(a) An array of real numbers

A = [a_ij],   i = 1, 2, . . . , m,   j = 1, 2, . . . , n,

is called an m × n matrix with m rows and n columns. The a_ij is referred to as the i, jth
element and denotes the element in the ith row and jth column. If m = n then A is called a
square matrix of order n. If the matrix has one column or one row then it is called a column
vector or a row vector respectively.
(b) In a square matrix A of order n the diagonal containing the elements a11, a22, . . . , ann is
called the principal or leading diagonal. The sum of the elements in this diagonal is called
the trace of A, that is

tr A = a11 + a22 + . . . + ann = Σ_{i=1}^{n} a_ii.
(c) A diagonal matrix is a square matrix that has its only non-zero elements along the leading
diagonal. A special case of a diagonal matrix is the unit or identity matrix I for which a11 =
a22 = . . . = ann = 1.
(d) A zero or null matrix 0 is a matrix with every element zero.
(e) The transposed matrix AT is the matrix A with rows and columns interchanged, its i, jth
element being aji.
(f) A square matrix A is called a symmetric matrix if AT = A. It is called skew symmetric if
AT = −A.
Equality
The matrices A and B are equal, that is A = B, if they are of the same order m × n and aij
= bij, 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Multiplication by a scalar
If λ is a scalar then the matrix λA has elements λ aij.
Addition
We can only add an m × n matrix A to another m × n matrix B and the elements of the sum
A + B are aij + bij, 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Properties of addition
(1) commutative law: A + B = B + A
(2) associative law: (A + B) + C = A + (B + C)
(3) distributive law: λ(A + B) = λA + λB, λ scalar
Matrix multiplication
If A is an m × p matrix and B a p × n matrix then we define the product C = AB as the m ×
n matrix with elements

c_ij = Σ_{k=1}^{p} a_ik b_kj,

where i = 1, 2, . . . , m; j = 1, 2, . . . , n.
Properties of multiplication
(1) The commutative law is not satisfied in general; that is, in general AB ≠ BA. Order
matters and we distinguish between AB and BA by the terminology: pre-multiplication
of B by A to form AB and post-multiplication of B by A to form BA.
(2) Associative law: A(BC) = (AB)C
(3) If λ is a scalar then (λA)B = A(λB) = λAB
(4) Distributive law over addition:
(A + B)C = AC + BC
A(B + C) = AB + AC
Note the importance of maintaining order of multiplication.
(5) If A is an m × n matrix and if Im and In are the unit matrices of order m and n
respectively then ImA = AIn = A
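The element formula c_ij = Σ_k a_ik b_kj and the failure of the commutative law are easy to see in a short sketch (the matrices below are assumed examples, not from the module):

```python
def matmul(A, B):
    # C = AB with c_ij = sum over k of a_ik * b_kj (inner dimensions must agree).
    m, p, n = len(A), len(B), len(B[0])
    assert len(A[0]) == p
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # pre-multiplication of B by A:  [[2, 1], [4, 3]]
print(matmul(B, A))  # post-multiplication:           [[3, 4], [1, 2]]
```

The two products differ, illustrating property (1): in general AB ≠ BA.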
1.3.4. Determinants
Solution
We will use the second row since it contains a zero entry; each zero entry means one cofactor fewer to evaluate.
Properties
(1) A(adj A) = |A |I
(2) | adj A | = |A | n−1, n being the order of A
(3) adj (AB) = (adj B)(adj A)
Equation 1.3.5
Properties
(1) If A is non-singular then |A | ≠ 0 and A−1 = (adj A)/|A |.
(2) If A is singular then |A | = 0 and A−1 does not exist.
(3) (AB)−1 = B−1A−1.
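Property (1), A^(-1) = (adj A)/|A|, can be sketched for a 3 × 3 matrix as follows (the matrix used is an assumed example, not from the module):

```python
def det3(A):
    # Cofactor expansion of a 3x3 determinant along the first row.
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def inverse3(A):
    # Property (1): A^{-1} = (adj A) / |A| when |A| != 0.
    d = det3(A)
    assert d != 0, "singular matrix: no inverse (property (2))"
    # Signed cofactors via the cyclic index trick for 3x3 matrices.
    cof = [[A[(i + 1) % 3][(j + 1) % 3] * A[(i + 2) % 3][(j + 2) % 3]
          - A[(i + 1) % 3][(j + 2) % 3] * A[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    # adj A is the transpose of the matrix of cofactors.
    return [[cof[j][i] / d for j in range(3)] for i in range(3)]

A = [[2, 0, 1], [0, 1, 0], [1, 0, 1]]
print(inverse3(A))  # [[1.0, 0.0, -1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 2.0]]
```

Multiplying A by the result reproduces the identity matrix, as property (1) requires.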
The inverse matrix can be determined by:
Equation 1.3.6
in which all the entries below the line are zero, and the leading element, marked *, in each
row above the line is non-zero. The number of non-zero rows in the echelon form is equal to rank
A.
When considering the solution of equations (1.1) we saw that provided the determinant of
the matrix A was not zero we could obtain explicit solutions in terms of the inverse matrix. However,
when we looked at cases with zero determinant the results were much less clear. The idea of the
rank of a matrix helps to make these results more precise. Defining the augmented matrix (A : b)
for (1.1) as the matrix A with the column b added to it then we can state the results of cases (3)
and (4) of Section 1.4 more clearly as follows:
If A and (A : b) have different rank then we have no solution to Ax = b. If the two matrices
have the same rank then a solution exists, and furthermore the solution will contain a number of
free parameters equal to (n−rank A).
In this section we reiterate some definitive statements about the solution of the system of
simultaneous linear equations

a11x1 + a12x2 + . . . + a1nxn = b1
a21x1 + a22x2 + . . . + a2nxn = b2
⋮
an1x1 + an2x2 + . . . + annxn = bn
or, in matrix notation,

Ax = b,

where A is the matrix of coefficients, x is the vector of unknowns and b is the vector of right-hand
sides. If b = 0 the equations are called homogeneous, while if b ≠ 0 they are called
nonhomogeneous (or inhomogeneous). Considering individual cases:
Case (1)
If b ≠ 0 and |A | ≠ 0 then we have a unique solution x = A −1 b .
Case (2)
If b = 0 and |A | ≠ 0 we have the trivial solution x = 0.
Case (3)
If b ≠ 0 and |A| = 0 then we have two possibilities: either the equations are inconsistent and
we have no solution or we have infinitely many solutions.
Case (4)
If b = 0 and |A| = 0 then we have infinitely many solutions.
Case (4) is one of the most important, since from it we can deduce the important result that the
homogeneous equation Ax = 0 has a non-trivial solution if and only if |A| = 0.
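Case (1) can be illustrated with Cramer’s Rule, one of the key terms of this module (the 2 × 2 system below is an assumed example):

```python
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def cramer2(A, b):
    # Case (1): b != 0 and |A| != 0 gives the unique solution x = A^{-1} b,
    # computed here by Cramer's rule: x_i = |A_i| / |A|, where A_i is A with
    # column i replaced by b.
    d = det2(A)
    assert d != 0, "zero determinant: cases (3)/(4), no unique solution"
    x1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d
    x2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d
    return [x1, x2]

# x1 + 2*x2 = 5, 3*x1 + 4*x2 = 11  has the unique solution x1 = 1, x2 = 2.
print(cramer2([[1, 2], [3, 4]], [5, 11]))  # [1.0, 2.0]
```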
Av = λv   (equation 1.5.1)

where λ is an eigenvalue of A, and v is the eigenvector of A corresponding to λ. Note that
an eigenvector cannot be a zero vector.
Rewriting Equation 1.5.1:

(A - λI)v = 0   (equation 1.5.2)
where the identity matrix I has been inserted to make the two terms in brackets size
compatible. Equation 1.5.2 has a non-trivial (nonzero vector) solution if and only if the coefficient
matrix is singular, that is,
det(A - λI) = 0   (equation 1.5.3)
This gives the characteristic equation of matrix A. Since A is n × n, Equation 1.5.3 has n
roots, which are the eigenvalues of A. The corresponding eigenvector for each λ is obtained by
solving Equation 1.5.2. Since A − λI is singular, it has at least one row dependent on the other rows.
Therefore, for each λ, Equation 1.5.2 has infinitely many solutions. A basis of solutions will then
represent all eigenvectors associated with λ.
Solution:
The characteristic equation yields the eigenvalues:
Let the three components of v1 be a, b, c. Then, the above system yields b = 0 and a + c
= 0. This implies there is a free variable, which can be either a or c. Letting a = 1 leads to c = −1,
and consequently the eigenvector associated with λ1 = 0 is determined as
Similarly, the eigenvectors associated with the other two eigenvalues (λ2 = 1, λ3 = 2) will
be obtained as
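For a 2 × 2 matrix the characteristic equation det(A − λI) = 0 is just a quadratic, which makes the whole procedure easy to sketch (the matrix below is an assumed example, not the one from this module):

```python
import math

def eig2(A):
    # For a 2x2 matrix, det(A - lambda*I) = 0 expands to
    # lambda^2 - tr(A)*lambda + det(A) = 0; solve with the quadratic formula.
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = [[2, 1], [1, 2]]
print(eig2(A))  # [1.0, 3.0]

# v = (1, 1) is an eigenvector for lambda = 3: check that A v = 3 v.
v = [1.0, 1.0]
Av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
print(Av)  # [3.0, 3.0]
```

Any nonzero multiple of v is also an eigenvector, which is why Equation 1.5.2 has infinitely many solutions for each λ.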
INSTRUCTIONS: Solve the following problems on matrix analysis. Write your answer and solution
in your notebook. Once you finish answering, submit a picture of your output via Google Classroom.
Please keep your work clean and organized.
1) Using the cofactor method, compute the determinant of these matrices:
(a) (b)
REFERENCES
Burden, R. L. (2011). Student Solutions Manual and Study Guide for Numerical Analysis (9th ed.).
Cengage Learning.