
MACM 316

Numerical
Analysis I
Week 1

Jane Shaw MacDonald


Course Outline
• CANVAS - monitor regularly, email me immediately if you notice any errors, broken links,
missing assignments, etc.

• Refer to syllabus and canvas before emailing me


• Lectures - are in class MWF between 1030 and 1120. EXCEPTION: three recorded lectures
which will be posted on canvas. These lectures are Monday June 24, Wednesday June 26,
and Wednesday July 24. You are not required to come into class on those three days.

• Textbook - technically, 10th ed. (9th edition is good too, and I heard you can find it online.
Warning: section numbers match, pages/exercises/examples don’t!)

• The textbook is a great learning tool and will strongly aid you in your studies throughout
this course
Assessment
• Quizzes (20%)
- on Fridays (almost every week), 10 total; best 8 count. Each quiz will have three questions, either
multiple choice or short answer.
- a strict 15 mins. (If you keep writing after 15 minutes, your quiz will be thrown out)
- questions will be inspired by the course content from the previous week

• Computational Assignments (CA) (15%)
- 5 computational assignments due almost every second Monday. To pass this course you
must have 50% on your CA!
- use the computational office hours with TAs; this is a great place to work on these assignments
- all the programming will be in MATLAB.

• Midterm (20%) - JUNE 14 2023 IN CLASS

• Final (45%) - date to be set by administration
Resources for success

• HIGHLY Suggested Practice Problems - will be posted on canvas throughout
the semester.

• Computational Office Hours - M/F 14:30 - 17:20, led by TAs, in person,
WMC 2820, Burnaby

• Tutorials - led by TAs. FANTASTIC resource.

• Lectures - I am not posting complete notes online
• Office Hours: W 16:00 - 17:00, WMC 2830, Burnaby
Why Numerical Analysis?

Components of Numerical Analysis

Criteria for judging algorithms
1. Accuracy
- the complexity of real problems introduces error
- error arises from the modelling, round-off (finite precision arithmetic), analytical error (approximating
functions like Taylor series, derivative approximations with differences, etc…)
- exact error is hard to come by ——> error estimation
- representing error (relative, absolute, …)

2. Efficiency
- measuring the cost of the algorithm, computational effort
- total work = number of steps * work per step
- how many iterations are needed ——> convergence & stopping criteria
- type of processor

3. Robustness
- “the ability to withstand adverse conditions or rigorous testing” (Dictionary.com); “the ability of an
[algorithm] to cope with errors during execution and cope with erroneous input” (Wikipedia)
- when does the algorithm provide us a good result, and when does it not?
- How badly can it fail?
- How does output depend on input ——> stability
Examples of Error
• Modelling error

• Round-off error

• Analytical error
Big-O Notation
• Definition 1 (small h):

• Definition 2 (large n):

• Examples:
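For reference, the usual textbook definitions that the blanks above get filled with in lecture (a sketch, standard notation):

```latex
% Definition 1 (small h): F(h) = O(G(h)) as h -> 0 if there exist
% constants K > 0 and h_0 > 0 such that
\[
  |F(h)| \le K\,|G(h)| \quad \text{whenever } 0 < h < h_0 .
\]
% Definition 2 (large n): x_n = O(y_n) as n -> infinity if there exist
% K > 0 and an integer N such that
\[
  |x_n| \le K\,|y_n| \quad \text{for all } n \ge N .
\]
% Example: \sin h = h + O(h^3) as h -> 0, since |\sin h - h| \le |h|^3/6.
```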

Round-Off Error
(Section 1.2)

• Two types
➡ Rounding: round “up” (to next closest number) if the digit following the last
significant digit is ≥ 5, and round “down” otherwise.

➡ Chopping: throw away all insignificant digits. This can be viewed as always
rounding toward zero.

• Example: On a (hypothetical) decimal computer that stores five significant digits

π = 3.1415 | 92653589793 ... is represented as

✴ round(π) =
✴ chop(π) =

• It doesn’t matter if it’s negative, i.e., round(−π) = chop(−π) =
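Both modes are easy to simulate on an ordinary binary machine. A minimal Python sketch (the course uses MATLAB, but the idea is identical; `rnd` and `chop` are hypothetical helper names):

```python
import math

def rnd(x, k):
    """Round x to k significant decimal digits (round half away from zero)."""
    if x == 0:
        return 0.0
    n = math.floor(math.log10(abs(x))) + 1   # digits left of the decimal point
    s = 10 ** (k - n)
    return math.copysign(math.floor(abs(x) * s + 0.5) / s, x)

def chop(x, k):
    """Chop x to k significant decimal digits (round toward zero)."""
    if x == 0:
        return 0.0
    n = math.floor(math.log10(abs(x))) + 1
    s = 10 ** (k - n)
    return math.copysign(math.floor(abs(x) * s) / s, x)

print(rnd(math.pi, 5), chop(math.pi, 5))     # 3.1416 3.1415
print(rnd(-math.pi, 5), chop(-math.pi, 5))   # -3.1416 -3.1415
```

The sign really doesn’t matter: `copysign` applies the same digit manipulation to |x| and restores the sign afterwards.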

Round-off Errors Examples:
slate.com: the deadly consequences of rounding errors

• Penny-shaving schemes —> Office Space, Superman III, Vancouver Stock
Exchange (similar error)

• Explosions —> February 1991 Gulf War, 28 dead, many more injured.
—> European Space Agency Ariane 5, loss of $500 million

• Politics —> Parliamentary elections in Germany.
Significant Digits (sig-figs)

• A simple definition: the digits known to be accurate.

• When reporting floating point numbers, 10.17 = 0.1017 × 10^2 means
something different from 10.1700 = 0.101700 × 10^2. The first has 4 sig-figs,
the second has 6.

• The numeric value is the same but the second is more accurately measured. We
can trust this measure up to the 6th digit (think about this in your assignments).

• Exercise: How many significant digits are in each floating point number?
0.000017; 0.00017305; 17035; 17035.0; 17000
Normalised Floating Point
• Floating point representation of a real number x is

fl(x) = ± 0.d1 d2 d3 ... dk × B^n

where k is the number of significant digits,
di are digits from the set {0, 1, …, B−1},
B is the base,
n is an exponent with −m < n ≤ M.
fl(x) is normalised if d1 ≠ 0.

• Binary is cumbersome and obscures calculations. Instead, we often illustrate ideas surrounding error
with normalised decimal representation.

• Note: This format (base B = 10) is what we’ll use in practice. It does differ from the IEEE standard
“1.d1 d2 ...”.
Examples:
• Convert to normalised floating point form with k = 4 and B = 10.
fl(27.39) =

fl(−0.00124) =

fl(3700) =
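The conversion can be mechanized for checking your hand work. A Python sketch under the k = 4, B = 10 convention above (`normalized_fl` is a hypothetical helper; the course’s actual tool is MATLAB):

```python
import math

def normalized_fl(x, k=4):
    """Format fl(x) as '±0.d1...dk x 10^n' with d1 != 0 (rounding)."""
    if x == 0:
        return "0." + "0" * k + " x 10^0"
    n = math.floor(math.log10(abs(x))) + 1   # exponent putting mantissa in [0.1, 1)
    d = round(abs(x) * 10 ** (k - n))        # k-digit integer mantissa
    if d == 10 ** k:                         # rounding carried past k digits, e.g. 0.99995
        d //= 10
        n += 1
    sign = "-" if x < 0 else ""
    return "%s0.%0*d x 10^%d" % (sign, k, d, n)

print(normalized_fl(27.39))     # 0.2739 x 10^2
print(normalized_fl(-0.00124))  # -0.1240 x 10^-2
print(normalized_fl(3700))      # 0.3700 x 10^4
```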

Perform arithmetic operations using floating point
(k = 3, B = 10)

• fl(1.37 + 0.0269)

• fl(4850 − 4820)

• fl(3780 + 0.321)
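These can be checked by simulating a k = 3 base-10 machine. A Python sketch (`fl` is a hypothetical helper standing in for the slide’s notation):

```python
import math

def fl(x, k=3):
    """Round x to k significant decimal digits."""
    if x == 0:
        return 0.0
    n = math.floor(math.log10(abs(x))) + 1
    if k >= n:
        s = 10 ** (k - n)
        return round(x * s) / s
    s = 10 ** (n - k)          # integer scale avoids inexact division by 0.1
    return round(x / s) * s

print(fl(1.37 + 0.0269))   # 1.4   (exact sum 1.3969, rounded to 3 digits)
print(fl(4850 - 4820))     # 30.0  (both operands are already 3-digit numbers)
print(fl(3780 + 0.321))    # 3780  (the small term is lost entirely)
```

The last line previews a recurring theme: adding a small number to a large one can have no effect at all in finite precision.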

Measuring Error

(Absolute error ignores size and can sometimes be misleading.)

Definition: Suppose x̂ is an approximation of x. Then
Ex = | x − x̂ | is the absolute error, and
Rx = Ex / | x | is the relative error (x ≠ 0).

The number p* is said to approximate p to t significant digits if t
is the largest nonnegative integer for which

| p − p* | / | p | ≤ 5 × 10^(−t).

For a base-10 (decimal) floating point number x with k significant digits:

Rx = | x − fl(x) | / | x | ≤ u = { 0.5 × 10^(1−k), rounding,
                                 { 10^(1−k),       chopping.

Here u is called the unit round-off error.
Derive u for chopping:
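One standard way to fill in this derivation (a sketch, using the normalised base-10 representation with k significant digits from the previous slides):

```latex
% Write x with d_1 \neq 0; chopping keeps the first k digits:
%   x = 0.d_1 d_2 \ldots d_k d_{k+1} \ldots \times 10^{n},
%   fl(x) = 0.d_1 d_2 \ldots d_k \times 10^{n}.
% Then
\[
  |x - \mathrm{fl}(x)| = 0.d_{k+1} d_{k+2} \ldots \times 10^{\,n-k}
                       < 1 \times 10^{\,n-k},
\]
% and since d_1 \ge 1 gives |x| \ge 0.1 \times 10^{n},
\[
  R_x = \frac{|x - \mathrm{fl}(x)|}{|x|}
      < \frac{10^{\,n-k}}{10^{\,n-1}} = 10^{\,1-k} = u .
\]
```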

Properties of absolute and relative errors
• Absolute error (Ex) is fine to use as long as | x | ≈ 1, but NOT if | x | is very
large or very small.

• Relative error (Rx) can be viewed as a percentage error

x̂ approximates x up to how many sig-figs?
“Large” Example: Factorials and Stirling’s Formula
• n! becomes notoriously HUGE as n gets larger
• James Stirling (1692-1770) discovered the famous approximation formula

n! ≈ √(2πn) (n/e)^n

• How good is Stirling’s approximation? —> try out MATLAB code: stirling.m.

• WHAT DO YOU LEARN?
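stirling.m is not reproduced here, but the experiment it runs is easy to sketch. In Python (a hypothetical stand-in for the MATLAB script):

```python
import math

def stirling(n):
    """Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

rels = {}
for n in (1, 5, 10, 20):
    exact = math.factorial(n)
    rels[n] = abs(exact - stirling(n)) / exact   # relative error
    print(n, exact, stirling(n), rels[n])
```

What you should see: the absolute error grows without bound as n! explodes, but the relative error steadily shrinks (roughly like 1/(12n)) — a first illustration of why relative error is the right thing to measure.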

Measuring Error
• From the unit round-off error we see that fl(x) = x(1 + ϵ) for some
| ϵ | ≤ u.
• It’s always desirable that for any mathematical operation “op”
fl(x op y) = (x op y)(1 + ϵ), for some | ϵ | ≤ u.
• This is true when op = ∗, ÷, but not for +, −.

Conclude: Operations +, − require careful consideration in numerical
algorithms!!
Subtractive Cancellation Errors
• Consider the sum 472635 + 27.5013 − 472630 with exact value
472635.0000 + 27.5013 − 472630.0000 = 32.5013

• Estimate it using k = 6 significant figures:

• The end result has how many significant digits of accuracy?
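Working the sum left to right on a simulated 6-digit machine shows the cancellation. A Python sketch (`fl` is a hypothetical helper rounding to k significant decimal digits):

```python
import math

def fl(x, k=6):
    """Round x to k significant decimal digits."""
    if x == 0:
        return 0.0
    n = math.floor(math.log10(abs(x))) + 1
    if k >= n:
        s = 10 ** (k - n)
        return round(x * s) / s
    s = 10 ** (n - k)
    return round(x / s) * s

step1 = fl(472635 + 27.5013)   # 472663.0 -- the .5013 tail is rounded away
result = fl(step1 - 472630)    # 33.0
print(result, "vs exact", 32.5013)
```

The computed 33 agrees with the exact 32.5013 in barely two significant digits: the leading digits cancelled, promoting the rounding error to the front of the answer.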

A floating point example:

• Using 3-digit decimal floating point with rounding, compute x ∗ y, where
x = 0.253 × 10^3, y = 0.456 × 10^3.
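In contrast to +, −, multiplication behaves well: the exact product just gets rounded back to 3 digits. A Python sketch (`fl` is the same hypothetical 3-digit rounding helper as before):

```python
import math

def fl(x, k=3):
    """Round x to k significant decimal digits."""
    if x == 0:
        return 0.0
    n = math.floor(math.log10(abs(x))) + 1
    if k >= n:
        s = 10 ** (k - n)
        return round(x * s) / s
    s = 10 ** (n - k)
    return round(x / s) * s

x, y = 0.253e3, 0.456e3
print(x * y)        # 115368.0 -- exact product
print(fl(x * y))    # 115000   -- rounded to 3 digits: 0.115 x 10^6
```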

IEEE Floating Point Standard
How numbers are actually stored in hardware
• Terminology: a sequence of 8 bits is called a byte.
• Binary, single precision, floating point numbers (with 4 bytes or 32 bits) are
represented in the form x = (−1)^s 2^(c−127) (1.f)
• The binary digits of s, c, f are stored as

s (1 bit) | c (8 bits) | f (23 bits)
sign | exponent | mantissa OR fractional part

• The IEEE double precision standard uses 8 bytes (64 bits): x = (−1)^s 2^(c−1023) (1.f)

s (1 bit) | c (11 bits) | f (52 bits)
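On real hardware these fields can be inspected directly. A Python sketch using the standard struct module (`fields32` is a hypothetical helper name):

```python
import struct

def fields32(x):
    """Split an IEEE single-precision number into its (s, c, f) bit fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]   # raw 32 bits as an int
    s = bits >> 31              # 1-bit sign
    c = (bits >> 23) & 0xFF     # 8-bit biased exponent
    f = bits & 0x7FFFFF         # 23-bit fractional part
    return s, c, f

s, c, f = fields32(7.0)
print(s, c, f)   # 0 129 6291456
# Reassemble: (-1)^s * 2^(c-127) * (1 + f/2^23) recovers the value
print((-1) ** s * 2.0 ** (c - 127) * (1 + f / 2 ** 23))   # 7.0
```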
Hexadecimal Binary Conversion
• Base-2 binary representation can be unwieldy (64 bits for double precision!)
• Base-16 (hexadecimal or “hex”) provides a convenient short form for binary
bit strings because 16 = 2^4. Why?

• Simple example: Consider the 16-bit (2-byte) integer

1011111000010100 ==> 2^15 + 2^13 + 2^12 + 2^11 + 2^10 + 2^9 + 2^4 + 2^2 = 48660

• Divide into 4-bit “nibbles”: 1011 | 1110 | 0001 | 0100

• Note: each nibble is an integer in [0, 15] . . . a hex digit!
• Use the table to write the equivalent hex form:
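The nibble-by-nibble conversion is exactly what a one-line base change does. A Python sketch of both routes:

```python
bits = "1011111000010100"
value = int(bits, 2)                 # interpret the string as a base-2 integer
print(value)                         # 48660

# route 1: split into 4-bit nibbles and map each to one hex digit
nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
print(nibbles)                       # ['1011', '1110', '0001', '0100']
print("".join("%X" % int(n, 2) for n in nibbles))   # BE14

# route 2: convert the whole integer at once -- same answer
print("%X" % value)                  # BE14
```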

Example:
Write +7₁₀ in IEEE single precision, using hex short form.
• First, convert +7 to IEEE binary form: +7 = (−1)^s × 2^(c−127) × 1.f
• Mantissa: what is 1.f? We must factor out the largest power of 2 less than 7

• Compare to IEEE form:

• Next, convert each into binary:

• Patch it together to obtain the 32-bit IEEE representation:



• Regroup into nibbles and convert to hex:

• Exercise: Work through a few other conversions:
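Once you have a hand answer, it can be verified in two lines of Python (struct is the standard library module):

```python
import struct

# raw 32-bit pattern of the single-precision number 7.0
bits = struct.unpack(">I", struct.pack(">f", 7.0))[0]
print(format(bits, "032b"))   # 01000000111000000000000000000000
print(format(bits, "08X"))    # 40E00000
```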

Conversion to IEEE Format
Example: Convert x = 5.2 to IEEE single precision format:
• First, write the integer part in binary: 5₁₀ = 101₂.
• Second, deal with the fractional part 0.2:

• Combine the two results, 5.2₁₀ = 101.00110011…₂, and “shift”

• Convert the exponent: 2^2 = 2^(129 − 127), so that c = 129₁₀ = 10000001₂

• Then the actual bits can be depicted graphically as
• How accurate is this floating point representation of 5.2₁₀?
• Convert the mantissa to decimal by summing all nonzero digits in f:

• Finally, multiply by 2^2: fl(5.2) = 5.19999980926514

• Relative error is Rx = | 5.2 − fl(5.2) | / 5.2 ≈ 3.7 × 10^(−8).
• Compare to the unit round-off error (t = 23):
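The whole calculation can be confirmed by round-tripping 5.2 through single precision in Python:

```python
import struct

# fl(5.2): store 5.2 as a 32-bit float, then read it back
fl52 = struct.unpack(">f", struct.pack(">f", 5.2))[0]
print(fl52)                      # 5.199999809265137

rel = abs(5.2 - fl52) / 5.2
print(rel)                       # ~3.67e-08, matching the slide's 3.7e-08
print(2.0 ** -24)                # unit round-off for rounding, ~5.96e-08
```

As expected, the observed relative error sits just below the single-precision unit round-off.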

