Computer Organization 2: Lab Tutorial 3 Chapter

This document discusses different methods of data representation in computing including complements, fixed-point representation, and floating-point representation. It covers the basics of 2's complement representation and describes 32-bit and IEEE single and double precision floating point formats. The document provides examples of expressing values in simplified 14-bit and IEEE floating point models and calculating the sum of two values using a 14-bit model. It concludes with a brief mention of error detection codes.



COMPUTER ORGANIZATION 2
Lab Tutorial 3

El-Sayed Abdel-Rahman ([email protected])
Aya Magdy ([email protected])

Chapter (3): Data Representation

Agenda

 Complements
 Fixed-Point Representation
 Floating-Point Representation
 Error Detection Codes
3-2 Complements
3-3 Fixed-Point Representation
 32-bit floating-point format.
 Leftmost bit = sign bit (0 = positive, 1 = negative).
 The next 8 bits hold the exponent, stored in a biased representation.
 The final portion of the word (23 bits in this example) is the significand (sometimes called the mantissa).

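The three fields described above can be pulled out of a 32-bit word with shifts and masks; the sketch below does this (the function name is ours, not from the slides):

```python
# A sketch (function name ours) of splitting a 32-bit floating-point
# word into its three fields with shifts and masks.
def split_fields(word: int):
    sign = (word >> 31) & 0x1         # leftmost bit: 0 positive, 1 negative
    exponent = (word >> 23) & 0xFF    # next 8 bits: biased exponent
    significand = word & 0x7FFFFF     # final 23 bits
    return sign, exponent, significand

# 0xC1700000 is the single-precision pattern for -15.0 under a bias of 127.
print(split_fields(0xC1700000))  # (1, 130, 7340032)
```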
3-4 Floating-Point Representation

 We introduce a hypothetical "Simple Model" to explain the concepts.
 In this model:
 A floating-point number is 14 bits in length
 The exponent field is 5 bits
 The significand field is 8 bits

3-4 Floating-Point Representation
 Example:
 Express 32₁₀ in the simplified 14-bit floating-point model.
 We know that 32 is 2⁵. So in (binary) scientific notation, 32 = 1.0 × 2⁵ = 0.1 × 2⁶.
 In a moment, we'll explain why we prefer the second notation over the first.
 Using this information, we put 110₂ (= 6₁₀) in the exponent field and 1 in the significand, as shown.

3-4 Floating-Point Representation
 Example:
 Express 32₁₀ in the revised 14-bit floating-point model.
 We know that 32 = 1.0 × 2⁵ = 0.1 × 2⁶.
 To use our excess-16 biased exponent, we add 16 to 6, giving 22₁₀ (= 10110₂).
 So we have:
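The encoding above can be checked mechanically. Below is a small sketch of an encoder for the 14-bit model (1 sign bit, 5-bit excess-16 exponent, 8-bit significand normalized as 0.1xxxxxxx with no hidden bit); the function name and structure are ours, not from the slides, and nonzero input is assumed:

```python
from fractions import Fraction

# Sketch of a 14-bit "simple model" encoder: 1 sign bit, a 5-bit
# excess-16 exponent, and an 8-bit significand normalized as
# 0.1xxxxxxx (no hidden bit). Assumes value != 0.
def encode_simple(value):
    x = Fraction(value)
    sign = 0 if x >= 0 else 1
    x, exp = abs(x), 0
    while x >= 1:              # shift right until x < 1 ...
        x /= 2
        exp += 1
    while x < Fraction(1, 2):  # ... and left until the leading bit is 1
        x *= 2
        exp -= 1
    sig = int(x * 2 ** 8)      # first 8 fraction bits (exact for these values)
    return sign, exp + 16, sig  # exponent stored excess-16

# 32 = 0.1 x 2^6 -> exponent field 6 + 16 = 22 (10110), significand 10000000
print(encode_simple(32))  # (0, 22, 128)
```

Calling it with the values on the following slides reproduces their fields as well.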
3-4 Floating-Point Representation
 Example:
 Express 0.0625₁₀ in the revised 14-bit floating-point model.
 We know that 0.0625 is 2⁻⁴. So in (binary) scientific notation, 0.0625 = 1.0 × 2⁻⁴ = 0.1 × 2⁻³.
 To use our excess-16 biased exponent, we add 16 to -3, giving 13₁₀ (= 01101₂).

3-4 Floating-Point Representation
 Example:
 Express -26.625₁₀ in the revised 14-bit floating-point model.
 We find 26.625₁₀ = 11010.101₂. Normalizing, we have: 26.625₁₀ = 0.11010101 × 2⁵.
 To use our excess-16 biased exponent, we add 16 to 5, giving 21₁₀ (= 10101₂). We also need a 1 in the sign bit.

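The step 26.625₁₀ = 11010.101₂ can be reproduced by converting the integer and fractional parts separately; the helper below (name ours) is a sketch of the usual repeated-doubling method for the fraction, applied to the magnitude only:

```python
# Sketch (helper name ours): convert the magnitude 26.625 to binary by
# handling the integer part with bin() and the fraction by repeated
# doubling, where each doubling's integer bit is the next digit.
def to_binary(value: float, frac_bits: int = 8) -> str:
    whole = int(value)
    frac = value - whole
    bits = bin(whole)[2:] + "."
    for _ in range(frac_bits):
        frac *= 2
        bits += "1" if frac >= 1 else "0"
        frac -= int(frac)
        if frac == 0:
            break
    return bits

print(to_binary(26.625))  # 11010.101
```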
3-4 Floating-Point Representation
 The IEEE has established a standard for floating-point numbers.
 The IEEE-754 single-precision floating-point standard uses an 8-bit exponent (with a bias of 127) and a 23-bit significand.
 The IEEE-754 double-precision standard uses an 11-bit exponent (with a bias of 1023) and a 52-bit significand.

3-4 Floating-Point Representation
 Example: Express -3.75 as a floating-point number using IEEE single precision.
 First, let's normalize according to IEEE rules:
 -3.75 = -11.11₂ = -1.111 × 2¹
 The bias is 127, so we add 127 + 1 = 128 (this is our exponent).
 The first 1 in the significand is implied, so it is not stored.
 Since we have an implied 1 in the significand, this equates to -(1).111₂ × 2⁽¹²⁸ ⁻ ¹²⁷⁾ = -1.111₂ × 2¹ = -11.11₂ = -3.75.

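This worked example can be verified with Python's struct module, which exposes the raw IEEE-754 single-precision bit pattern:

```python
import struct

# Verify the example: pack -3.75 as IEEE-754 single precision and
# split the resulting 32-bit pattern into its three fields.
bits = struct.unpack(">I", struct.pack(">f", -3.75))[0]
sign = bits >> 31                 # 1 (negative)
exponent = (bits >> 23) & 0xFF    # 128 = 127 (bias) + 1
significand = bits & 0x7FFFFF     # 111 followed by 20 zeros; implied 1 not stored

print(sign, exponent, bin(significand))
```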
3-4 Floating-Point Representation
 Example:
 Find the sum of 12₁₀ and 1.25₁₀ using the 14-bit "simple" floating-point model.
 We find 12₁₀ = 0.1100 × 2⁴, and 1.25₁₀ = 0.101 × 2¹ = 0.000101 × 2⁴.
 Thus, our sum is 0.110101 × 2⁴.

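The alignment step above (rewriting the smaller operand with the larger exponent before adding significands) can be sketched with exact fractions; the helper name is ours:

```python
from fractions import Fraction

# Sketch of floating-point addition by exponent alignment (names ours):
# shift the smaller operand's significand right until both exponents
# match, then add the significands.
def align_and_add(a_sig, a_exp, b_sig, b_exp):
    if a_exp < b_exp:
        a_sig, a_exp, b_sig, b_exp = b_sig, b_exp, a_sig, a_exp
    b_sig /= 2 ** (a_exp - b_exp)   # 0.101 x 2^1 becomes 0.000101 x 2^4
    return a_sig + b_sig, a_exp

# 12 = 0.1100 x 2^4 and 1.25 = 0.101 x 2^1
sig, exp = align_and_add(Fraction(12, 16), 4, Fraction(5, 8), 1)
print(sig, exp, float(sig) * 2 ** exp)  # 53/64 4 13.25
```

Here 53/64 is exactly 0.110101₂, so the result is 0.110101 × 2⁴ = 13.25, as on the slide.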
3-6 Error Detection Codes
Thanks
