Chapter 01.05
Floating Point Representation
Consider an old time cash register that would ring any purchase between 0 and 999.99 units
of money. Note that there are five (not six) working spaces in the cash register (the decimal
number is shown just for clarification).
Q: How will the smallest number 0 be represented?
A: The number 0 will be represented as
0 0 0 . 0 0
Q: Now look at any typical number between 0 and 999.99, such as 256.78. How would it be
represented?
A: The number 256.78 will be represented as
2 5 6 . 7 8
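To make the five working spaces concrete, here is a minimal Python sketch (not part of the original cash register description) of how an amount would be displayed; the function name register_display is purely illustrative.

# A minimal sketch, assuming amounts between 0 and 999.99 are rounded to the
# nearest 0.01 and shown in five working spaces (three digits before the
# decimal point and two after).
def register_display(amount):
    if not (0 <= amount <= 999.99):
        raise ValueError("amount must lie between 0 and 999.99")
    cents = round(amount * 100)              # round to the nearest 0.01
    return f"{cents // 100:03d}.{cents % 100:02d}"

print(register_display(0))       # 000.00
print(register_display(256.78))  # 256.78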
Q: What magnitude of relative errors would be created when numbers are rounded off to fit in this format?
A: Rounding off a number such as 256.786 to 256.79 creates a round-off error of $256.786 - 256.79 = -0.004$. The relative error is
$\epsilon_t = \frac{-0.004}{256.786} \times 100\% = -0.001558\%$.
For another number, 3.546, rounding it off to 3.55 accounts for the same round-off
error of $3.546 - 3.55 = -0.004$. The relative error in this case is
$\epsilon_t = \frac{-0.004}{3.546} \times 100\% = -0.11280\%$.
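The two relative errors above can be reproduced with a short Python sketch; the helper name relative_error_percent is an illustrative choice and is not defined in this chapter.

# A sketch reproducing the round-off and relative errors quoted above for the
# numbers 256.786 and 3.546 rounded to two decimal places.
def relative_error_percent(true_value, approx_value):
    return (true_value - approx_value) / true_value * 100

for true_value, rounded in [(256.786, 256.79), (3.546, 3.55)]:
    err = true_value - rounded
    rel = relative_error_percent(true_value, rounded)
    print(f"{true_value}: round-off error {err:.6f}, relative error {rel:.6f}%")
# 256.786: round-off error -0.004000, relative error -0.001558%
# 3.546: round-off error -0.004000, relative error -0.112803%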
Q: If I am interested in keeping relative errors of similar magnitude for the range of numbers,
what alternatives do I have?
A: To keep the relative error of similar order for all numbers, one may use a floating-point
representation of the number. For example, in floating-point representation, a number
256.78 is written as $2.5678 \times 10^2$,
0.003678 is written as $3.678 \times 10^{-3}$, and
256.789 is written as $2.56789 \times 10^2$.
The general representation of a number in base-10 format is given as
sign $\times$ mantissa $\times$ $10^{\text{exponent}}$
or for a number $y$,
$y = \sigma \times m \times 10^{e}$
where
$\sigma$ = sign of the number, $+1$ or $-1$
$m$ = mantissa, $1 \le m < 10$
$e$ = integer exponent (also called the characteristic)
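As an illustration of this decomposition, the Python sketch below splits a nonzero number into the sign, mantissa, and exponent defined above; decompose_base10 is a hypothetical name, and edge cases such as zero are ignored.

import math

# A sketch, under the definitions above: y = sign * m * 10**e with 1 <= m < 10.
def decompose_base10(y):
    sign = 1 if y >= 0 else -1
    magnitude = abs(y)
    e = math.floor(math.log10(magnitude))    # integer exponent
    m = magnitude / 10 ** e                  # mantissa in [1, 10)
    return sign, m, e

print(decompose_base10(256.78))    # approximately (1, 2.5678, 2)
print(decompose_base10(0.003678))  # approximately (1, 3.678, -3)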
Let us go back to the example where we have five spaces available for a number. Let us also
limit ourselves to positive numbers with positive exponents for this example. If we use the
same five spaces, then let us use four for the mantissa and the last one for the exponent. So
the smallest number that can be represented is 1, but the largest number would be $9.999 \times 10^9$.
By using the floating-point representation, what we lose in accuracy, we gain in the range of
numbers that can be represented. For our example, the maximum number represented
changed from 999.99 to $9.999 \times 10^9$.
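A small Python sketch of this hypothetical five-space format is given below; encode_5space is an illustrative name, and the sketch assumes a positive number whose exponent fits in a single decimal digit.

import math

# A sketch of the five-space format above: four spaces for the mantissa digits
# d.ddd and one space for a positive one-digit exponent.
def encode_5space(y):
    e = math.floor(math.log10(y))
    if not 0 <= e <= 9:
        raise OverflowError("exponent does not fit in one digit")
    m = round(y / 10 ** e, 3)                # keep four significant digits
    return m, e

print(encode_5space(1.0))      # (1.0, 0)   smallest representable number
print(encode_5space(9.999e9))  # (9.999, 9) largest representable number
print(encode_5space(256.78))   # (2.568, 2) i.e. 2.568 x 10^2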
What is the error in representing numbers in the scientific format? Take the previous
example of 256.78. It would be represented as $2.568 \times 10^2$ and in the five spaces as
2 5 6 8 2
As another example, the number 576329.78 would be represented as $5.763 \times 10^5$ and in five
spaces as
5 7 6 3 5
So, how much error is caused by such a representation? In representing 256.78, the
round-off error created is $256.78 - 256.8 = -0.02$, and the relative error is
$\epsilon_t = \frac{-0.02}{256.78} \times 100\% = -0.0077888\%$.
In representing 576329.78, the round-off error created is $576329.78 - 5.763 \times 10^5 = 29.78$,
and the relative error is
$\epsilon_t = \frac{29.78}{576329.78} \times 100\% = 0.0051672\%$.
What you are seeing now is that although the absolute errors are larger for large numbers, the
relative errors are of the same order of magnitude for both large and small numbers.
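This observation can be checked with a few lines of Python; keeping a four-digit mantissa through the .3e string format is just one convenient way of mimicking the representation used above.

# A sketch checking that absolute errors grow with the size of the number while
# relative errors stay of the same order for a four-digit mantissa.
def four_digit_mantissa(y):
    return float(f"{y:.3e}")   # e.g. 256.78 -> 2.568e+02 -> 256.8

for y in (256.78, 576329.78):
    approx = four_digit_mantissa(y)
    print(y, approx, abs(y - approx), abs(y - approx) / y * 100)
# absolute errors: about 0.02 and 29.78; relative errors: about 0.0078% and 0.0052%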
Example 1
Represent $(54.75)_{10}$ in floating-point binary format. Assume that the number is written in a
hypothetical word that is 9 bits long, where the first bit is used for the sign of the number, the
second bit for the sign of the exponent, the next four bits for the mantissa, and the next three
bits for the exponent.
Solution
$(54.75)_{10} = (110110.11)_2 = (1.1011011)_2 \times 2^{(5)_{10}}$
The sign of the number is positive, so the bit for the sign of the number will have zero in it.
0
The sign of the exponent is positive. So the bit for the sign of the exponent will have zero in
it.
The mantissa
$m = 1011$
(There are only 4 places for the mantissa, and the leading 1 is not stored as it is always
expected to be there), and
the exponent
$e = (101)_2$.
Hence, we have the representation as
0 0 1 0 1 1 1 0 1
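One possible Python sketch of this encoding procedure is shown below; encode_9bit is a hypothetical name, and the sketch handles only positive numbers with positive exponents, chopping the mantissa to four bits as in the example.

import math

# A sketch of the hypothetical 9-bit word: 1 sign bit, 1 exponent-sign bit,
# 4 mantissa bits (hidden leading 1), 3 exponent bits.
def encode_9bit(y):
    e = math.floor(math.log2(y))       # actual exponent, assumed to be 0..7
    m = y / 2 ** e                     # mantissa with 1 <= m < 2
    frac = m - 1                       # bits after the radix point
    mantissa_bits = ""
    for _ in range(4):                 # keep four bits, chop the rest
        frac *= 2
        mantissa_bits += str(int(frac))
        frac -= int(frac)
    return "0" + "0" + mantissa_bits + format(e, "03b")

print(encode_9bit(54.75))   # 001011101, matching the representation above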
Example 2
What number does the floating-point format given below
0 1 1 0 1 1 1 1 0
represent in base-10 format? Assume a hypothetical 9-bit word, where the first bit is used for
the sign of the number, the second bit for the sign of the exponent, the next four bits for the
mantissa, and the next three bits for the exponent.
Solution
Given
Bit representation      Part of floating-point number
0                        Sign of the number
1                        Sign of the exponent
1011                     Magnitude of the mantissa
110                      Magnitude of the exponent
The number therefore is
$(1.1011)_2 \times 2^{-(110)_2}$
$= (1 \times 2^0 + 1 \times 2^{-1} + 0 \times 2^{-2} + 1 \times 2^{-3} + 1 \times 2^{-4}) \times 2^{-(1 \times 2^2 + 1 \times 2^1 + 0 \times 2^0)}$
$= (1.6875)_{10} \times 2^{-(6)_{10}}$
$= 0.0263671875$
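The reverse operation can be sketched as follows; decode_9bit is again an illustrative name, and the bit layout is the one assumed in Examples 1 and 2.

# A sketch decoding the hypothetical 9-bit word back to base 10.
def decode_9bit(word):
    sign = -1 if word[0] == "1" else 1
    exp_sign = -1 if word[1] == "1" else 1
    mantissa = 1 + int(word[2:6], 2) / 2 ** 4    # restore the hidden leading 1
    exponent = exp_sign * int(word[6:9], 2)
    return sign * mantissa * 2 ** exponent

print(decode_9bit("011011110"))   # 0.0263671875, the value found above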
Example 3
A machine stores floating-point numbers in a hypothetical 10-bit binary word. It employs
the first bit for the sign of the number, the second one for the sign of the exponent, the next
four for the exponent, and the last four for the magnitude of the mantissa.
a) Find how 0.02832 will be represented in the floating-point 10-bit word.
b) What is the decimal equivalent of the 10-bit word representation of part (a)?
Solution
a) For the number, we have the integer part as 0 and the fractional part as 0.02832.
The binary equivalent of the integer part is simply $(0)_{10} = (0)_2$. Converting the fractional
part by successive multiplication by 2 gives
$(0.02832)_{10} \cong (0.000001110011\ldots)_2 = (1.110011\ldots)_2 \times 2^{-6}$.
Keeping four bits for the magnitude of the mantissa (the remaining bits are chopped) gives
$m = 1100$, and the magnitude of the exponent is $(6)_{10} = (0110)_2$ in four bits. The sign of
the number is positive and the sign of the exponent is negative, so the 10-bit word is
0 1 0 1 1 0 1 1 0 0
b) Converting the above floating point representation from part (a) to base 10 by following
Example 2 gives
$(1.1100)_2 \times 2^{-(0110)_2}$
$= (1 \times 2^0 + 1 \times 2^{-1} + 1 \times 2^{-2} + 0 \times 2^{-3} + 0 \times 2^{-4}) \times 2^{-(0 \times 2^3 + 1 \times 2^2 + 1 \times 2^1 + 0 \times 2^0)}$
$= (1.75)_{10} \times 2^{-(6)_{10}}$
$= 0.02734375$
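The 10-bit format of this example can be sketched in Python as shown below; encode_10bit and decode_10bit are hypothetical names, and the mantissa is chopped to four bits as in the solution above.

import math

# A sketch of the hypothetical 10-bit word: 1 sign bit, 1 exponent-sign bit,
# 4 exponent bits, 4 mantissa bits (hidden leading 1).
def encode_10bit(y):
    sign = "1" if y < 0 else "0"
    y = abs(y)
    e = math.floor(math.log2(y))
    exp_sign = "1" if e < 0 else "0"
    frac = y / 2 ** e - 1                    # fractional part of the 1.xxxx mantissa
    bits = ""
    for _ in range(4):                       # keep four bits, chop the rest
        frac *= 2
        bits += str(int(frac))
        frac -= int(frac)
    return sign + exp_sign + format(abs(e), "04b") + bits

def decode_10bit(word):
    sign = -1 if word[0] == "1" else 1
    exponent = (-1 if word[1] == "1" else 1) * int(word[2:6], 2)
    mantissa = 1 + int(word[6:10], 2) / 2 ** 4
    return sign * mantissa * 2 ** exponent

word = encode_10bit(0.02832)
print(word)                 # 0101101100
print(decode_10bit(word))   # 0.02734375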
Q: How do you determine the accuracy of a floating-point representation of a number?
A: The machine epsilon, $\epsilon_{mach}$, is a measure of the accuracy of a floating-point representation
and is found by calculating the difference between 1 and the next number that can be
represented. For example, assume a 10-bit hypothetical computer where the first bit is used
for the sign of the number, the second bit for the sign of the exponent, the next four bits for
the exponent and the next four for the mantissa.
We represent 1 as
0 0 0 0 0 0 0 0 0 0
and the next higher number that can be represented is
0 0 0 0 0 0 0 0 0 1
The difference between the two numbers is
$(1.0001)_2 \times 2^{(0000)_2} - (1.0000)_2 \times 2^{(0000)_2}$
$= (0.0001)_2$
$= (2^{-4})_{10}$
$= (0.0625)_{10}$.
The machine epsilon is
$\epsilon_{mach} = 0.0625$.
The machine epsilon, $\epsilon_{mach}$, can also be calculated simply as two raised to the negative power of the
number of bits used for the mantissa. As far as determining accuracy is concerned, the machine
epsilon $\epsilon_{mach}$ is an upper bound on the magnitude of the relative error that is created by the
approximate representation of a number (see Example 4).
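A quick numerical check of this discussion, under the same assumed 10-bit layout, is given below; the helper value is an illustrative name that simply evaluates the stored mantissa and exponent.

import sys

# A sketch: 1 is stored with mantissa bits 0000 and exponent 0, and the next
# representable number has mantissa bits 0001.
def value(mantissa_bits, exponent):
    return (1 + int(mantissa_bits, 2) / 2 ** 4) * 2 ** exponent

print(value("0001", 0) - value("0000", 0))   # 0.0625 = 2**-4, the machine epsilon
print(2 ** -4)                               # two to the negative number of mantissa bits
print(sys.float_info.epsilon)                # 2**-52, the analogous value for double precision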
Example 4
A machine stores floating-point numbers in a hypothetical 10-bit binary word. It employs
the first bit for the sign of the number, the second one for the sign of the exponent, the next
four for the exponent, and the last four for the magnitude of the mantissa. Confirm that the
magnitude of the relative true error that results from approximate representation of 0.02832
in the 10-bit format (as found in previous example) is less than the machine epsilon.
Solution
From Example 3, the ten-bit representation of 0.02832, bit by bit, is
0 1 0 1 1 0 1 1 0 0
Again from Example 3, converting the above floating point representation to base-10 gives
$(1.1100)_2 \times 2^{-(0110)_2} = (0.02734375)_{10}$
The absolute relative true error between the number 0.02832 and its approximate
representation 0.02734375 is
$\left|\epsilon_t\right| = \left|\frac{0.02832 - 0.02734375}{0.02832}\right| = 0.034472$,
which is less than the machine epsilon for a computer that uses 4 bits for the mantissa, that is,
$\epsilon_{mach} = 2^{-4} = 0.0625$.
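The same check can be written in a couple of lines of Python, reusing the values already computed in Examples 3 and 4.

# A sketch confirming that the relative true error is below the machine epsilon.
true_value = 0.02832
stored = 0.02734375                 # decoded 10-bit representation, from Example 3
rel_error = abs(true_value - stored) / abs(true_value)
eps_mach = 2 ** -4

print(rel_error)               # about 0.034472
print(rel_error <= eps_mach)   # True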
Q: How are numbers actually represented in floating point in a real computer?
A: In a typical computer, a real number is stored as per the IEEE-754 (Institute of
Electrical and Electronics Engineers) floating-point arithmetic format. To keep the
discussion short and simple, let us point out the salient features of the single precision
format.
A single precision number uses 32 bits.
A number y is represented as
$y = \sigma \times (1.a_1 a_2 \cdots a_{23})_2 \times 2^{e}$
where
$\sigma$ = sign of the number (positive or negative)
$a_i$ = entries of the mantissa, which can only be 0 or 1, $i = 1, \ldots, 23$
$e$ = the exponent
Note the 1 before the radix point.
The first bit represents the sign of the number (0 for positive number and 1 for a
negative number).
The next eight bits represent the exponent. Note that there is no separate bit for the
sign of the exponent. The sign of the exponent is taken care of by normalizing, that is,
by adding 127 to the actual exponent. For example, if the actual exponent is 6, it would
be stored as the binary equivalent of $127 + 6 = 133$. Why is 127, and not some other
number, added to the actual exponent? Because in eight bits the largest integer that can
be represented is $(11111111)_2 = 255$, and half of 255 is approximately 127. This allows
negative and positive exponents to be represented equally. The normalized (also called
biased) exponent has the range from 0 to 255, and hence the exponent $e$ has the range
$-127 \le e \le 128$.
Suppose that, instead of using the biased exponent, we still used eight bits for the
exponent but used one bit for the sign of the exponent and seven bits for its magnitude.
In seven bits, the largest integer that can be represented is $(1111111)_2 = 127$, in which
case the range of the exponent $e$ would have been smaller, that is, $-127 \le e \le 127$.
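For the single-precision format itself, the standard-library Python sketch below unpacks the sign bit, biased exponent, and mantissa bits of a number; float32_fields is a hypothetical name.

import struct

# A sketch that reads back the IEEE-754 single-precision fields of a number:
# 1 sign bit, 8 biased-exponent bits (bias 127), 23 mantissa bits.
def float32_fields(y):
    (bits,) = struct.unpack(">I", struct.pack(">f", y))
    sign = bits >> 31
    biased_exponent = (bits >> 23) & 0xFF
    mantissa_bits = bits & 0x7FFFFF
    return sign, biased_exponent, biased_exponent - 127, mantissa_bits

print(float32_fields(54.75))
# (0, 132, 5, 5963776): positive sign, biased exponent 127 + 5 = 132,
# actual exponent 5, as found for 54.75 in Example 1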