Computer Organization 2: Lab Tutorial 3
EL-SAYED ABDEL-RAHMAN
[email protected]
Aya Magdy
Chapter (3)
Agenda: Chapter (3) Data Representation
Complements
Fixed-Point Representation
Floating-Point Representation
Error Detection Codes
3-2 Complements
3-3 Fixed-Point Representation
32-bit floating-point format:
Leftmost bit = sign bit (0 positive, 1 negative).
Exponent in the next 8 bits, using a biased representation.
The final portion of the word (23 bits in this example) is the significand (sometimes called the mantissa).
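As a sketch, the three fields of such a 32-bit word can be pulled apart with shifts and masks; the shift amounts simply mirror the 1/8/23 field widths described above:

```python
def split_fields(word):
    """Split a 32-bit floating-point word into sign (1 bit),
    exponent (8 bits), and significand (23 bits), leftmost first."""
    sign = (word >> 31) & 0x1
    exponent = (word >> 23) & 0xFF
    significand = word & 0x7FFFFF
    return sign, exponent, significand

# Example word (arbitrary): sign 1, exponent 10000000 (128),
# significand 111 followed by twenty zeros (0x700000 = 7340032).
print(split_fields(0xC0700000))  # -> (1, 128, 7340032)
```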
3-4 Floating-Point Representation
Example:
Express 32₁₀ in the simplified 14-bit floating-point model.
We know that 32 is 2⁵. So in binary scientific notation, 32 = 1.0 x 2⁵ = 0.1 x 2⁶.
In a moment, we'll explain why we prefer the second notation over the first.
Using this information, we put 110₂ (= 6₁₀) in the exponent field and 1 in the significand.
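A minimal sketch of this packing, assuming widths of 1 sign bit, a 5-bit (unbiased) exponent, and an 8-bit significand for the 14-bit model (the widths are an inference from the 14-bit total, not stated on the slides):

```python
def pack_simple(sign, exponent, significand_bits):
    """Concatenate the fields of the 'simple' 14-bit model into a
    bit string; significand_bits is the fraction after the binary
    point, left-aligned and zero-padded (value = 0.bits x 2^exponent)."""
    return f"{sign:01b}{exponent:05b}{significand_bits:0<8}"

# 32 = 0.1 x 2^6: exponent field 00110, significand 10000000
print(pack_simple(0, 6, "1"))  # -> '00011010000000'
```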
Example:
Express 32₁₀ in the revised 14-bit floating-point model.
We know that 32 = 1.0 x 2⁵ = 0.1 x 2⁶.
To use our excess-16 biased exponent, we add 16 to 6, giving 22₁₀ (= 10110₂).
So we have: sign 0, exponent field 10110₂, and a 1 in the leading significand bit.
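The bias arithmetic can be sketched the same way (again assuming 1/5/8 field widths for the 14-bit model):

```python
BIAS = 16  # excess-16 exponent of the revised model

def pack_revised(sign, true_exponent, significand_bits):
    """Like the simple model, but the stored exponent is the true
    exponent plus the bias of 16."""
    return f"{sign:01b}{true_exponent + BIAS:05b}{significand_bits:0<8}"

# 32 = 0.1 x 2^6: stored exponent 6 + 16 = 22 = 10110
print(pack_revised(0, 6, "1"))  # -> '01011010000000'
```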
Example:
Express 0.0625₁₀ in the revised 14-bit floating-point model.
We know that 0.0625 is 2⁻⁴. So in binary scientific notation, 0.0625 = 1.0 x 2⁻⁴ = 0.1 x 2⁻³.
To use our excess-16 biased exponent, we add 16 to -3, giving 13₁₀ (= 01101₂).
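Decoding the resulting pattern confirms the value; this sketch assumes the same 1/5/8 field split for the revised model:

```python
def decode_revised(bits):
    """Recover the value from a 14-bit string of the revised model:
    1 sign bit, 5-bit excess-16 exponent, 8-bit significand."""
    sign = -1 if bits[0] == "1" else 1
    exponent = int(bits[1:6], 2) - 16        # strip the bias
    fraction = int(bits[6:], 2) / 2 ** 8     # 0.xxxxxxxx as a value
    return sign * fraction * 2 ** exponent

# 0.0625 = 0.1 x 2^-3, stored exponent 16 + (-3) = 13 = 01101
print(decode_revised("00110110000000"))  # -> 0.0625
```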
Example:
Express -26.625₁₀ in the revised 14-bit floating-point model.
We find 26.625₁₀ = 11010.101₂. Normalizing, we have: 26.625₁₀ = 0.11010101 x 2⁵.
To use our excess-16 biased exponent, we add 16 to 5, giving 21₁₀ (= 10101₂). We also need a 1 in the sign bit.
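The normalization step (finding 0.11010101 x 2⁵ from 26.625) can be sketched for values that have a short, exact binary expansion:

```python
def normalize(x):
    """Rewrite positive x as 0.1xxx... x 2^e and return (bits, e).
    A sketch that assumes x is exactly representable in binary."""
    e = 0
    while x >= 1:        # move the binary point left until x < 1
        x /= 2
        e += 1
    while x < 0.5:       # move it right until the leading bit is 1
        x *= 2
        e -= 1
    bits = ""
    while x:             # peel off fraction bits one at a time
        x *= 2
        bits += "1" if x >= 1 else "0"
        x -= int(x)
    return bits, e

print(normalize(26.625))  # -> ('11010101', 5)
```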
The IEEE has established a standard for floating-point numbers.
The IEEE-754 single-precision standard uses an 8-bit exponent (with a bias of 127) and a 23-bit significand.
The IEEE-754 double-precision standard uses an 11-bit exponent (with a bias of 1023) and a 52-bit significand.
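These widths and biases can be checked with Python's struct module, which packs native IEEE-754 values:

```python
import struct

def fields_single(x):
    """Sign, biased exponent, and fraction of x in IEEE-754 single."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def fields_double(x):
    """Same for double precision: 11-bit exponent, 52-bit fraction."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    return bits >> 63, (bits >> 52) & 0x7FF, bits & (1 << 52) - 1

print(fields_single(1.0))  # -> (0, 127, 0): true exponent 0 plus bias 127
print(fields_double(1.0))  # -> (0, 1023, 0): bias 1023
```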
Example: Express -3.75 as a floating-point number using IEEE single precision.
First, let's normalize according to IEEE rules:
-3.75 = -11.11₂ = -1.111 x 2¹
The bias is 127, so we add 127 + 1 = 128 (this is our stored exponent).
The first 1 in the significand is implied, so the stored fraction is 111 followed by zeros.
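The result can be cross-checked against Python's native single-precision encoding:

```python
import struct

# -3.75 as IEEE-754 single precision, big-endian, shown in hex.
encoded = struct.pack(">f", -3.75).hex()
print(encoded)  # -> 'c0700000'
# c0700000 = 1 10000000 11100000000000000000000
#            sign 1, exponent 128 (= 127 + 1), fraction 111... (leading 1 implied)
```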
Example:
Find the sum of 12₁₀ and 1.25₁₀ using the 14-bit "simple" floating-point model.
We find 12₁₀ = 0.1100 x 2⁴, and 1.25₁₀ = 0.101 x 2¹ = 0.000101 x 2⁴.
Thus, our sum is 0.110101 x 2⁴.
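The alignment step above can be sketched as follows (values taken from the example; a real adder would also renormalize and round the result):

```python
def add_significands(f1, e1, f2, e2):
    """Add two numbers given as fractional significands (Python
    floats in [0.5, 1)) and exponents; the smaller exponent is
    aligned to the larger before the significands are added."""
    if e1 < e2:
        f1, e1, f2, e2 = f2, e2, f1, e1
    f2 /= 2 ** (e1 - e2)     # shift the smaller significand right
    return f1 + f2, e1

# 12 = 0.1100 x 2^4 (0.75), 1.25 = 0.101 x 2^1 (0.625)
frac, exp = add_significands(0.75, 4, 0.625, 1)
print(frac, exp)           # 0.828125 is 0.110101 in binary
print(frac * 2 ** exp)     # -> 13.25
```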
3-6 Error Detection Codes
Thanks