AICT Lecture 3

DECISION-MAKING

• Computers can make decisions, and they can make them very fast. Right now, a computer is deciding what the solution to a mathematical equation is. Somewhere else, a computer is deciding whether to suspend someone's credit card to protect them from fraud, and another computer is deciding whether an image shows a stop sign or a bird.
• An important part of computer science is understanding how computers can make the right decisions, or at least pretty good ones.
• One of the ways computers (and sometimes humans) make decisions is with a structure called a decision tree. A decision tree encodes a series of simple yes-or-no questions that you can follow in order to answer a more complex question. A silly decision tree, for example, can help you decide which of eight different creatures you are dealing with by asking just three yes-or-no questions (a sketch in code follows below).
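A minimal sketch of this idea in Python; the three questions and the eight creature names below are made up for illustration and are not the ones from the lecture's tree:

```python
# A decision tree written directly as nested yes/no questions.
# Three binary questions are enough to distinguish 2**3 = 8 outcomes.
def identify_creature(has_fur: bool, can_fly: bool, lives_near_water: bool) -> str:
    # All questions and creature names here are hypothetical placeholders.
    if has_fur:
        if can_fly:
            return "bat" if lives_near_water else "flying squirrel"
        return "otter" if lives_near_water else "cat"
    if can_fly:
        return "duck" if lives_near_water else "sparrow"
    return "fish" if lives_near_water else "lizard"

print(identify_creature(has_fur=False, can_fly=True, lives_near_water=True))  # duck
```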
INTERFACES
• In computer science, we call the menu that an abstraction
offers its interface.
• In computer science, the interface is often called an API. That
officially stands for Application Programming Interface.
However, the only important letter to remember is the "I,"
standing for "interface."
• What is an interface in computer science?
• Interfaces are points of communication
between different components of an
application or system.
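A small sketch of the idea in Python, with invented class and method names: the abstract class spells out the interface (the operations callers may rely on), while the concrete class hides one possible implementation behind it.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The interface: the 'menu' of operations callers may rely on."""

    @abstractmethod
    def read(self, address: int) -> int: ...

    @abstractmethod
    def write(self, address: int, value: int) -> None: ...

class ListStorage(Storage):
    """One possible implementation hidden behind the interface."""

    def __init__(self, size: int) -> None:
        self._cells = [0] * size

    def read(self, address: int) -> int:
        return self._cells[address]

    def write(self, address: int, value: int) -> None:
        self._cells[address] = value

mem = ListStorage(4)
mem.write(2, 0xA3)
print(mem.read(2))  # 163
```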
CHAPTER 1
Data Storage (& Representation)
Bits and Bit Patterns
• Bit: Binary Digit (0 or 1)
• Bit Patterns are used to represent information
– Numbers
– Text characters
– Images
– Sound
– And others
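A tiny sketch of how one and the same bit pattern can represent different kinds of information, depending on interpretation (the pattern below is chosen arbitrarily):

```python
bits = "01000001"        # one bit pattern (8 bits = 1 byte)
value = int(bits, 2)     # interpreted as a number: 65
char = chr(value)        # interpreted as a text character: 'A' (ASCII)
print(value, char)
```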

Boolean Operations
• Boolean Operation: An operation that
manipulates one or more true/false values
• Specific operations
– AND
– OR
– XOR (exclusive or)
– NOT
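A brief sketch of the four operations applied to single true/false values; Figure 1.1 tabulates the same input and output combinations:

```python
# Each Boolean operation maps one or two true/false inputs (here 0 or 1) to an output.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={a & b}  OR={a | b}  XOR={a ^ b}")

print("NOT 0 =", int(not 0), "  NOT 1 =", int(not 1))
```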

Figure 1.1 The possible input and output values of
Boolean operations AND, OR, and XOR (exclusive or)

Gates
• Gate: A device that computes a Boolean
operation
– Often implemented as (small) electronic circuits
– Provide the building blocks from which computers
are constructed
– VLSI (Very Large Scale Integration)

Figure 1.2 A pictorial representation of AND, OR, XOR,
and NOT gates as well as their input and output values

Flip-flops
• Flip-flop: A circuit built from gates that can
store one bit.
– One input line is used to set its stored value to 1
– One input line is used to set its stored value to 0
– While both input lines are 0, the most recently
stored value is preserved
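A rough behavioural sketch of this description in Python (a model of the behaviour only, not of the gate-level circuits in Figures 1.3 through 1.5):

```python
class FlipFlop:
    """A behavioural model of a flip-flop: it stores one bit."""

    def __init__(self) -> None:
        self.stored = 0

    def step(self, set_line: int, reset_line: int) -> int:
        if set_line == 1:
            self.stored = 1      # the input line that sets the stored value to 1
        elif reset_line == 1:
            self.stored = 0      # the input line that sets the stored value to 0
        # while both lines are 0, the most recently stored value is preserved
        return self.stored

ff = FlipFlop()
print(ff.step(1, 0))  # 1  (set)
print(ff.step(0, 0))  # 1  (value is remembered)
print(ff.step(0, 1))  # 0  (reset)
```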

Figure 1.3 A simple flip-flop circuit

Figure 1.4 Setting the output of a
flip-flop to 1

Figure 1.4 Setting the output of a
flip-flop to 1 (continued)

Figure 1.4 Setting the output of a
flip-flop to 1 (continued)

Figure 1.5 Another way of constructing
a flip-flop

Hexadecimal Notation
• Hexadecimal notation: A shorthand notation
for long bit patterns
– Divides a pattern into groups of four bits each
– Represents each group by a single symbol
• Example: 10100011 becomes A3
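A short sketch of the grouping rule, using the example pattern from the slide:

```python
bits = "10100011"
# Divide the pattern into groups of four bits and map each group to one hex symbol.
groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
hex_symbols = "".join(format(int(g, 2), "X") for g in groups)
print(groups, "->", hex_symbols)   # ['1010', '0011'] -> A3
```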

Figure 1.6 The hexadecimal coding
system

Main Memory Cells
• Cell: A unit of main memory (typically 8 bits, which is
one byte)
– Most significant bit: the bit at the left (high-order)
end of the conceptual row of bits in a memory cell
– Least significant bit: the bit at the right
(low-order) end of the conceptual row of bits in a
memory cell
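A small sketch that picks out the two ends of a byte-size cell (the pattern is arbitrary):

```python
cell = 0b10110010                    # one byte-size memory cell
most_significant = (cell >> 7) & 1   # high-order (leftmost) bit
least_significant = cell & 1         # low-order (rightmost) bit
print(most_significant, least_significant)   # 1 0
```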

Figure 1.7 The organization of a
byte-size memory cell

Main Memory Addresses
• Address: A “name” that uniquely identifies one cell
in the computer’s main memory
– The names are actually numbers.
– These numbers are assigned consecutively
starting at zero.
– Numbering the cells in this manner associates an
order with the memory cells.
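A minimal sketch of cells named by consecutive numeric addresses, modelled here as a Python bytearray:

```python
main_memory = bytearray(8)   # eight cells, with addresses 0 through 7
main_memory[5] = 0xA3        # store a bit pattern in the cell at address 5
print(main_memory[5])        # 163
```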

Figure 1.8 Memory cells arranged by
address

Memory Terminology
• Random Access Memory (RAM): Memory in
which individual cells can be easily accessed in
any order
• Dynamic Memory (DRAM): RAM composed of
volatile memory cells whose contents fade and
must be refreshed periodically

Measuring Memory Capacity
• Kilobyte: 2^10 bytes = 1,024 bytes
– Example: 3 KB = 3 × 1,024 bytes

• Megabyte: 2^20 bytes = 1,048,576 bytes
– Example: 3 MB = 3 × 1,048,576 bytes

• Gigabyte: 2^30 bytes = 1,073,741,824 bytes
– Example: 3 GB = 3 × 1,073,741,824 bytes
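The examples above are plain arithmetic; a quick check:

```python
KB, MB, GB = 2**10, 2**20, 2**30
print(3 * KB, 3 * MB, 3 * GB)   # 3072  3145728  3221225472
```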

Mass Storage
• Additional devices:
– Magnetic disks
– Magnetic tape
– CDs
– DVDs
– Flash drives
– Solid-state disks
• Advantages over main memory
– Less volatility
– Larger storage capacities
– Low cost
– In many cases can be removed

Figure 1.9 A magnetic disk storage
system

Figure 1.10 CD storage

Flash Drives
• Flash Memory – circuits that trap electrons in
tiny silicon dioxide chambers
• Repeated erasing slowly damages the media
• Mass storage of choice for:
– Digital cameras
• SD Cards provide GBs of storage
– Smartphones

Representing Text
• Each character (letter, punctuation, etc.) is assigned
a unique bit pattern.
– ASCII: Uses patterns of 7 bits to represent most
symbols used in written English text
– ISO developed a number of 8-bit extensions to
ASCII, each designed to accommodate a major
language group
– Unicode: Uses patterns of up to 21 bits to represent
the symbols used in languages worldwide; 16 bits
suffice for the world's most commonly used languages

Figure 1.11 The message “Hello.” in
ASCII or UTF-8 encoding
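A small sketch of the figure's example, encoding the message and printing each byte as a bit pattern (ASCII and UTF-8 produce the same bytes for this text):

```python
message = "Hello."
encoded = message.encode("ascii")   # identical to the UTF-8 bytes for this message
print([format(b, "08b") for b in encoded])
# ['01001000', '01100101', '01101100', '01101100', '01101111', '00101110']
```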

Representing Numeric Values
• Binary notation: Uses bits to represent a
number in base two
• Limitations of computer representations of
numeric values
– Overflow: occurs when a value is too big to be
represented
– Truncation: occurs when a value cannot be
represented accurately
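A short sketch of both limitations, using an 8-bit unsigned cell for the overflow case and a binary fraction for the truncation case:

```python
# Overflow: the value no longer fits in the chosen number of bits.
cell = (200 + 100) % 2**8   # an 8-bit cell wraps around instead of storing 300
print(cell)                 # 44

# Truncation: the value cannot be represented exactly.
print(0.1 + 0.2)            # 0.30000000000000004 (the binary fractions are truncated)
```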

The Binary System
The traditional decimal system is based on powers of ten.
The binary system is based on powers of two.

Figure 1.13 The base ten and binary
systems

Figure 1.14 Decoding the binary
representation 100101
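A sketch of the decoding shown in Figure 1.14, weighting each bit by the corresponding power of two:

```python
bits = "100101"
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)           # 37  (32 + 4 + 1)
print(int(bits, 2))    # 37, using Python's built-in conversion
```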

Figure 1.15 An algorithm for finding the binary
representation of a positive integer

Figure 1.16 Applying the algorithm in Figure 1.15
to obtain the binary representation of thirteen
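A sketch of the repeated-division algorithm that Figures 1.15 and 1.16 describe, applied to thirteen:

```python
def to_binary(n: int) -> str:
    # Repeatedly divide by two; the remainders, read in reverse, are the bits.
    digits = []
    while n > 0:
        digits.append(str(n % 2))
        n //= 2
    return "".join(reversed(digits)) or "0"

print(to_binary(13))   # 1101
```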

Figure 1.17 The binary addition facts

Figure 1.18 Decoding the binary
representation 101.101

Storing Integers
• Two’s complement notation: The most
popular means of representing integer values
• Excess notation: Another means of
representing integer values
• Both can suffer from overflow errors

Figure 1.19 Two’s complement
notation systems

Figure 1.20 Coding the value -6 in two’s
complement notation using four bits
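A minimal sketch of four-bit two's complement, using the same value as Figure 1.20:

```python
BITS = 4

def to_twos_complement(value: int) -> str:
    # Negative values are represented modulo 2**BITS.
    return format(value % 2**BITS, f"0{BITS}b")

def from_twos_complement(pattern: str) -> int:
    raw = int(pattern, 2)
    return raw - 2**BITS if pattern[0] == "1" else raw

print(to_twos_complement(-6))        # 1010
print(from_twos_complement("1010"))  # -6
```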

Figure 1.21 Addition problems converted to
two’s complement notation

Figure 1.22 An excess eight conversion
table
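A sketch of excess notation with four-bit patterns, as in Figure 1.22: a fixed offset (eight) is added so that every value is stored as a non-negative pattern.

```python
EXCESS = 8   # excess eight, used with four-bit patterns

def encode_excess(value: int) -> str:
    return format(value + EXCESS, "04b")

def decode_excess(pattern: str) -> int:
    return int(pattern, 2) - EXCESS

print(encode_excess(-6))       # 0010
print(decode_excess("1101"))   # 5
```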

Figure 1.23 An excess notation system using bit
patterns of length three

Storing Fractions
• Floating-point Notation: Consists of a sign bit,
a mantissa field, and an exponent field.
• Related topics include
– Normalized form
– Truncation errors

Figure 1.24 Floating-point notation
components

Figure 1.25 Encoding the value 2⅝
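A hedged sketch assuming the book-style 8-bit format of Figure 1.24 (one sign bit, a three-bit exponent in excess-four notation, and a four-bit mantissa holding the bits just right of the radix point); with only four mantissa bits, 2⅝ (binary 10.101) loses its low-order bit, which is exactly the kind of truncation error mentioned above.

```python
def encode(value: float) -> str:
    # Assumed book-style 8-bit format: sign (1 bit), exponent (3 bits, excess four),
    # mantissa (4 bits, radix point at its left).
    sign = "0" if value >= 0 else "1"
    magnitude = abs(value)
    exponent = 0
    # Normalize so the mantissa lies in [0.5, 1), i.e. its leftmost bit is 1.
    while magnitude >= 1:
        magnitude /= 2
        exponent += 1
    while 0 < magnitude < 0.5:
        magnitude *= 2
        exponent -= 1
    mantissa_bits = ""
    for _ in range(4):              # only four mantissa bits fit
        magnitude *= 2
        bit = int(magnitude)
        mantissa_bits += str(bit)
        magnitude -= bit
    return sign + format(exponent + 4, "03b") + mantissa_bits

print(encode(2 + 5 / 8))   # 01101010 -- the fifth mantissa bit of 10.101 is lost,
                           # so the stored pattern actually represents 2 1/2
```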

Communication Errors
• Parity bits (even versus odd)
• Checkbytes
• Error correcting codes

Figure 1.26 The ASCII codes for the letters A and
F adjusted for odd parity
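A sketch of the odd-parity adjustment, assuming the parity bit is placed in front of the 7-bit ASCII code so that each 8-bit pattern contains an odd number of 1s:

```python
def with_odd_parity(code: int, width: int = 7) -> str:
    bits = format(code, f"0{width}b")
    # Choose the parity bit so the total number of 1s (parity bit included) is odd.
    parity = "0" if bits.count("1") % 2 == 1 else "1"
    return parity + bits

print(with_odd_parity(ord("A")))   # 11000001
print(with_odd_parity(ord("F")))   # 01000110
```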

Figure 1.27 An error-correcting code

Figure 1.28 Decoding the pattern 010100 using
the code in Figure 1.27

