macm316_week1
Numerical Analysis I
Week 1
• Textbook - technically, the 10th ed. (the 9th edition is good too, and I heard you can find it online.
Warning: section numbers match, but pages/exercises/examples don’t!)
• The textbook is a great learning tool and will strongly aid you in your studies throughout
this course
Assessment
• Quizzes (20%)
- on Fridays (almost every week), 10 total. Best of 8. Each quiz will have three questions, either
multiple choice or short answer.
- a strict 15 minutes. (If you keep writing after 15 minutes, your quiz will be thrown out.)
- questions will be inspired by the course content from the previous week
Resources for success
Why Numerical Analysis?
Components of Numerical Analysis
Criteria for judging algorithms
1. Accuracy
- the complexity of real problems introduces error
- error arises from modelling, round-off (finite precision arithmetic), and analytical error (approximating
functions with Taylor series, approximating derivatives with differences, etc…)
- exact error is hard to come by ——> error estimation
- representing error (relative, absolute, …)
2. Efficiency
- measuring the cost of the algorithm, computational effort
- total work = number of steps * work per step
- how many iterations are needed ——> convergence & stopping criteria
- type of processor
3. Robustness
- “the ability to withstand adverse conditions or rigorous testing” (Dictionary.com); “the ability of an
[algorithm] to cope with errors during execution and cope with erroneous input” (Wikipedia)
- when does the algorithm give us a good result, and when does it not?
- How badly can it fail?
- How does output depend on input ——> stability
Examples of Error
• Modelling error
• Round-off error
• Analytical error
Big-O Notation
• Definition 1 (small h):
• Examples:
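A standard statement of this definition (the slide’s exact wording may differ): we write
F(h) = L + O(G(h)) as h → 0
if there are constants K > 0 and h_0 > 0 such that |F(h) − L| ≤ K |G(h)| whenever 0 < h < h_0.
Example: from the Taylor expansion, cos(h) = 1 − h^2/2 + O(h^4) as h → 0.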
Round-Off Error
(Section 1.2)
• Two types
➡ Rounding: round “up” (to next closest number) if the digit following the last
significant digit is ≥ 5, and round “down” otherwise.
➡ Chopping: throw away all insignificant digits. This can be viewed as always
rounding toward zero.
✴ round(π) =
✴ chop(π) =
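A possible filled-in answer, assuming k = 5 significant digits (π = 3.14159265…):
✴ round(π) = 3.1416
✴ chop(π) = 3.1415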
Round-off Error Examples:
slate.com: the deadly consequences of rounding errors
• Explosions —> February 1991 Gulf War, 28 dead, many more injured.
—> European Space Agency Ariane 5, loss of $500 million
Significant Digits (sig-figs)
• The numeric value is the same but the second is measured more accurately. We
can trust this measurement up to the 6th digit (think about this in your assignments).
• Exercise: How many significant digits are in each floating point number?
0.000017; 0.00017305; 17035; 17035.0; 17000
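A possible answer key (using the usual convention that trailing zeros without a decimal point are not significant):
0.000017 → 2; 0.00017305 → 5; 17035 → 5; 17035.0 → 6; 17000 → 2 (ambiguous without scientific notation).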
Normalised Floating Point
• Floating point representation of a real number x is
fl(x) = ±0.d_1 d_2 d_3 … d_k × B^n (normalised, so d_1 ≠ 0)
• Note: This format (base B = 10) is what we’ll use in practice. It does differ from the IEEE standard
“1.d_1 d_2 …”
Examples:
• Convert to normalised floating point form with k = 4 and B = 10.
fl(27.39) =
fl(−0.00124) =
fl(3700) =
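Possible worked answers (no rounding is actually needed, since each value fits in k = 4 digits):
fl(27.39) = 0.2739 × 10^2
fl(−0.00124) = −0.1240 × 10^(−2)
fl(3700) = 0.3700 × 10^4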
Perform arithmetic operations using floating point
(k = 3, B = 10)
• fl(1.37 + 0.0269)
• fl(4850 − 4820)
• fl(3780 + 0.321)
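Possible worked answers, assuming each result is rounded to k = 3 digits:
• fl(1.37 + 0.0269) = fl(1.3969) = 0.140 × 10^1 = 1.40
• fl(4850 − 4820) = 0.300 × 10^2 = 30.0 (both operands are exactly representable with 3 digits)
• fl(3780 + 0.321) = fl(3780.321) = 0.378 × 10^4 = 3780 (the smaller number is lost entirely)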
Measuring Error
• Note: absolute error ignores size and can sometimes be misleading.
• Definition: Suppose x̂ is an approximation of x. Then
E_x = |x − x̂| is the absolute error, and
R_x = E_x / |x| is the relative error (x ≠ 0).
• For a base-10 (decimal) floating point number x with k significant digits:
R_x = |x − fl(x)| / |x| ≤ u, where u = 0.5 × 10^(1−k) for rounding and u = 10^(1−k) for chopping.
Properties of absolute and relative errors
• Absolute error (E_x) is fine to use as long as |x| ≈ 1, but NOT if |x| is very
large or very small.
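A small illustration of why size matters: if x = 10^6 and x̂ = 10^6 + 1, then E_x = 1 looks large but
R_x = 10^(−6), an excellent approximation; if x = 10^(−6) and x̂ = 2 × 10^(−6), then E_x = 10^(−6) looks
tiny but R_x = 1, i.e. a 100% error.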
“Large” Example: Factorials and Stirling’s Formula
• n! becomes notoriously HUGE as n gets larger
• James Stirling (1692-1770) discovered the famous approximation formula
n! ≈ √(2πn) (n/e)^n
• How good is Stirling’s approximation? —> try out MATLAB code: stirling.m.
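A minimal MATLAB sketch in the spirit of stirling.m (the actual course file is not reproduced here and may differ); it compares n! with Stirling’s formula and prints the relative error.

% stirling_sketch.m -- compare n! with Stirling's approximation
% (assumed minimal version; the course file stirling.m may differ)
for n = [5 10 20 50 100]
    exact  = factorial(n);                    % n!
    approx = sqrt(2*pi*n) * (n/exp(1))^n;     % Stirling's formula
    relerr = abs(exact - approx) / exact;     % relative error
    fprintf('n = %3d   relative error = %.3e\n', n, relerr);
end

The relative error shrinks roughly like 1/(12n), so the approximation improves steadily as n grows.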
Measuring Error
• From the unit round-off error we see that fl(x) = x(1 + ϵ) for some |ϵ| ≤ u.
• It is always desirable that for any arithmetic operation “op”,
fl(x op y) = (x op y)(1 + ϵ), for some |ϵ| ≤ u.
• This is true when op = ∗, ÷, but not for +, −.
Subtractive Cancellation Errors
• Consider the sum 472635 + 27.5013 − 472630 with exact value
472635.0000 + 27.5013 − 472630.0000 = 32.5013
• The end result has how many significant digits of accuracy?
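A possible worked answer, assuming 6-digit rounding arithmetic: fl(472635 + 27.5013) =
fl(472662.5013) = 472663, and then 472663 − 472630 = 33.0000, versus the exact 32.5013. The relative
error is about 1.5 × 10^(−2), so only roughly two significant digits survive, even though every
intermediate quantity carried six.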
A floating point example:
IEEE Floating Point Standard
How numbers are actually stored in hardware
• Terminology: a sequence of 8 bits is called a byte.
• Binary, single precision, floating point numbers (with 4 bytes or 32 bits) are
represented in the form x = (−1)^s × 2^(c−127) × (1.f)
• The binary digits of s, c, f are stored as
s (1 bit) | c (8 bits) | f (23 bits)
sign | exponent | mantissa OR fractional part
• The IEEE double precision standard uses 8 bytes (64 bits): x = (−1)^s × 2^(c−1023) × (1.f),
stored as
s (1 bit) | c (11 bits) | f (52 bits)
Hexadecimal-Binary Conversion
• Base-2 binary representation can be unwieldy (64 bits for double precision!)
• Base-16 (hexadecimal or “hex”) provides a convenient short form for binary
bit strings because 16 = 2^4. Why?
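Because 16 = 2^4, each hex digit corresponds to exactly one group of 4 bits, so conversion is done
digit by digit. For example, (1101 0110)₂ = (D6)₁₆, and a full 64-bit double needs only 16 hex characters.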
Example:
Write +7₁₀ in IEEE single precision, using hex short form.
• First, convert +7 to IEEE binary form: +7 = (−1)^s × 2^(c−127) × (1.f)
• Mantissa: to find 1.f, we must factor out the largest power of 2 less than 7.
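A possible completion of this example: 7 = 2^2 × 1.75 = 2^2 × (1.11)₂, so s = 0, c − 127 = 2 giving
c = 129 = (10000001)₂, and f = 1100…0 (23 bits). The stored bit string is
0 10000001 11000000000000000000000
which groups into nibbles as hex 40E00000. As a check, the MATLAB call num2hex(single(7)) should
return '40e00000'.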
Conversion to IEEE Format
Example: Convert x = 5.2 to IEEE single precision format:
• First, write the integer part in binary: 5₁₀ = 101₂.
• Second, deal with fractional part 0.2:
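A possible completion: 0.2₁₀ = 0.00110011…₂ (the block 0011 repeats forever), so
5.2₁₀ = 101.00110011…₂ = 1.0100110011…₂ × 2^2. Because the expansion never terminates, the stored
23-bit fraction must be rounded, which is the source of the error examined next.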
• How accurate is this floating point representation of 5.2₁₀?
• Convert the mantissa to decimal by summing all nonzero digits in f:
• Relative error is R_x = |5.2 − fl(5.2)| / 5.2 ≈ 3.7 × 10^(−8).
• Compare to unit round-off error (t = 23):
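A possible completion, using the rounding bound with k = 24 significant bits (the implicit leading 1 plus
t = 23 stored bits): u = 0.5 × 2^(1−24) = 2^(−24) ≈ 6.0 × 10^(−8), so the observed R_x ≈ 3.7 × 10^(−8)
is indeed below u.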