Chapter 5 & 6: CPU and Memory Organization
Introduction to Central Processing Unit (CPU)
The part of the computer that performs the bulk of data
processing operations is called the central processing
unit (CPU)
The central processing unit (CPU) of a computer is
the main unit that dictates the rest of the computer
organization
The CPU is made up of three major parts:
Register set
ALU
Control unit
CPU
1. Register set: stores intermediate data during the
execution of instructions;
2. Arithmetic logic unit (ALU): performs the required
micro-operations for executing the instructions;
3. Control unit: supervises the transfer of information
among the registers and instructs the ALU as to which
operation to perform by generating control signals.
[Figure: block diagram of the CPU showing the Control Unit,
the Arithmetic Logic Unit, and the Registers]
Registers
• In the Basic Computer, there is only one general-purpose
register, the Accumulator (AC)
• In modern CPUs, there are many general-purpose
registers.
• It is advantageous to have many registers:
– Transfers between registers within the processor are
relatively fast
– Going “off the processor” to access memory is much
slower, because memory access is the most time-consuming
operation in a computer
Registers cont…
• There are three common types of internal register
organization in a CPU (an illustrative example follows
below):
• Single accumulator organization
• General register organization
• Stack organization
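As an illustration (this example is not from the original slides, but follows standard textbook treatments): to compute X = (A + B) * (C + D), a single-accumulator machine uses one-address instructions such as LOAD A, ADD B, STORE T; a general-register machine uses two- or three-address instructions such as ADD R1, A, B and MUL X, R1, R2; and a stack machine uses zero-address instructions, pushing A and B, issuing ADD (which pops both operands and pushes their sum), and so on, finally popping the result into X.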
5.2 General Register Organization
• The CPU must have some working space (fast-access storage close
to the CPU)
• This space is used to store intermediate values efficiently
• Intermediate data that need to be stored include pointers,
counters, return addresses, temporary results, and partial products.
• Saving them in main memory is undesirable because memory access is
time consuming.
• It is more efficient and faster to store them inside the processor.
• The solution is to design multiple registers inside the
processor and connect them through a common bus.
5.2 General Register Organization (cont.)
[Figure: bus organization for 7 CPU registers — the 7 registers
are connected to the ALU through a common bus system]
5.3 Stack Organization
A useful feature that is included in the CPU of most
computers is a stack, or last-in, first-out (LIFO), list
Stack: A storage device that stores information in such a
manner that the item stored last is the first item
retrieved.
The operation of a stack can be compared to a stack of
trays. The last tray placed on top of the stack is the first
to be taken off.
5.3 Stack Organization
Stack pointer (SP): A register that holds the address of the
top item in the stack.
SP always points at the top item in the stack
There are exactly two operations that can be performed on a
stack:
Push (push-down): inserts a new item on top of the stack.
Pop (pop-up): removes the top item from the stack.
Register Stack
• A stack can be organized as a
collection of a finite number
of registers.
• In a 64-word stack, the stack
pointer contains 6 bits (since
2^6 = 64).
• Two one-bit registers are used:
FULL is set to 1 when the
stack is full, and EMPTY is set
to 1 when the stack is empty.
• The data register DR holds
the data to be written into or
read from the stack.
The following are the micro-operations
associated with the stack
• Three items are placed in the stack: A, B, and C, in that order
• The push operation is implemented with the following
sequence of micro-operations (see below).
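The push sequence itself is not reproduced in the text above; the standard register-transfer sequence for this register stack (as in Mano-style treatments, which these slides appear to follow) is:

    SP ← SP + 1                      increment the stack pointer
    M[SP] ← DR                       write the item in DR onto the top of the stack
    IF (SP = 0) THEN (FULL ← 1)      the pointer wrapped around to 0, so the stack is full
    EMPTY ← 0                        the stack is no longer empty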
The following are the micro-operations
associated with the stack
• An item is deleted (popped) from the stack only if the stack is
not empty.
• The pop operation consists of the following sequence of
micro-operations (see below).
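Again, the sequence itself is not reproduced in the text above; the standard pop sequence for this register stack is:

    DR ← M[SP]                       read the top item into DR
    SP ← SP − 1                      decrement the stack pointer
    IF (SP = 0) THEN (EMPTY ← 1)     the pointer is back at 0, so the stack is empty
    FULL ← 0                         the stack is no longer full

A minimal Python sketch of this 64-word register stack (the class and method names are chosen here for illustration; they are not from the slides) behaves the same way:

    class RegisterStack:
        """64-word register stack with a stack pointer and FULL/EMPTY flags."""

        def __init__(self, size=64):
            self.size = size
            self.mem = [0] * size    # the stack registers
            self.sp = 0              # 6-bit stack pointer (wraps modulo 64)
            self.full = False
            self.empty = True

        def push(self, dr):
            if self.full:
                raise OverflowError("stack is full")
            self.sp = (self.sp + 1) % self.size   # SP <- SP + 1
            self.mem[self.sp] = dr                # M[SP] <- DR
            self.full = (self.sp == 0)            # pointer wrapped around: stack is full
            self.empty = False

        def pop(self):
            if self.empty:
                raise IndexError("stack is empty")
            dr = self.mem[self.sp]                # DR <- M[SP]
            self.sp = (self.sp - 1) % self.size   # SP <- SP - 1
            self.empty = (self.sp == 0)           # pointer back at 0: stack is empty
            self.full = False
            return dr

    s = RegisterStack()
    s.push("A"); s.push("B"); s.push("C")   # the A, B, C example from the slide
    print(s.pop(), s.pop(), s.pop())        # prints: C B A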
Reverse Polish Notation (RPN)
• A stack organization is very effective for evaluating
arithmetic expressions
• The common mathematical way of writing arithmetic
expressions makes them awkward for a computer to
evaluate directly
• Common arithmetic expressions are written in infix
notation, with each operator written between its operands.
• Reverse Polish notation is a postfix notation (it places
operators after their operands)
• Example: consider the following expression:
• A + B    Infix notation
• + A B    Prefix or Polish notation
• A B +    Postfix or Reverse Polish notation
Reverse Polish Notation (RPN)
• The reverse Polish notation is in a form suitable for stack
manipulation.
• The following expressions are written in reverse Polish notation
as:
• A * B + C * D  →  (A B *) + (C D *)  →  A B * C D * +
• (3 * 4) + (5 * 6)  →  3 4 * 5 6 * +
Evaluation procedure:
1. Scan the expression from left to right.
2. When an operator is reached, perform the operation with the
two operands found on the left side of the operator.
3. Replace the two operands and the operator by the result
obtained from the operation.
Reverse Polish Notation (RPN)
• Example: consider the following expression
• Infix (3 * 4) + (5 * 6) = 42
• Reverse Polish notation: 3 4 * 5 6 * +
Stack evaluation:
• Get the next value
• If the value is data: push it
• Else if the value is an operation: pop, pop, evaluate, and push the result.
Trace:  3 4 * 5 6 * +  →  12 5 6 * +  →  12 30 +  →  42
Reverse Polish Notation (RPN)
• Reverse Polish notation: 3 4 * 5 6 * +
• Consider the stack operation below:
• First the number 3 is pushed onto the stack, then the number 4
• The next symbol is the multiplication operator *
• This causes a multiplication of the two topmost items in the stack
• The two operands are popped off the stack and the product is placed
on top of the stack, replacing the two original operands.
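A compact Python sketch of this postfix evaluation procedure (written here for illustration; it is not part of the original slides):

    def eval_rpn(tokens):
        """Evaluate a reverse Polish (postfix) expression using a stack."""
        ops = {
            "+": lambda a, b: a + b,
            "-": lambda a, b: a - b,
            "*": lambda a, b: a * b,
            "/": lambda a, b: a / b,
        }
        stack = []
        for tok in tokens:
            if tok in ops:
                b = stack.pop()            # second operand (pushed last)
                a = stack.pop()            # first operand
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))   # operand: push it onto the stack
        return stack.pop()

    print(eval_rpn("3 4 * 5 6 * +".split()))   # prints 42.0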
5.4 Complex Instruction Set Computers: CISC
• A computer with a large number of instructions is called a
complex instruction set computer, or CISC.
• Complex instruction set computers are often used in scientific
computing applications requiring lots of floating-point
arithmetic.
CISC Characteristics
• A large number of instructions - typically from 100 to 250
instructions.
• Some instructions that perform specialized tasks and are used
infrequently.
• A large variety of addressing modes - typically 5 to 20
different modes.
• Variable-length instruction formats
• Instructions that manipulate operands in memory.
Reduced Instruction Set Computers: RISC
• A computer with few instructions and a simple construction
is called a reduced instruction set computer, or RISC.
• RISC architecture is simple and efficient.
The major characteristics of RISC architecture are:
– Relatively few instructions
– Relatively few addressing modes
– Memory access limited to load and store instructions (see the
example after this list)
– All operations are done within the registers of the CPU
– Fixed-length and easily-decoded instruction format.
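As an illustration (this example is not from the original slides): on a CISC machine, adding a value in memory to a register can be a single instruction with a memory addressing mode, e.g. ADD R1, X meaning R1 ← R1 + M[X]; on a RISC machine the same work requires an explicit load followed by a register-to-register add, e.g. LOAD R2, X and then ADD R1, R1, R2, because only load and store instructions are allowed to access memory.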
Chapter 6: Memory Organization
Subtopics include:
– Memory Hierarchy
– Main Memory
– External Memory
– Cache Memory
6. Memory Organization
The memory unit is an essential component in any digital
computer since it is needed for storing programs and
data
No one technology is optimal in satisfying the memory
requirements for a computer system.
Memory exhibits a wide range of types, technologies,
organizations, performance, and cost.
Not all accumulated information is needed by the CPU at
the same time
Therefore, it is more economical to use low-cost storage
devices to serve as a backup for storing information
that is not currently used by the CPU
6. Memory Organization
Some characteristics of Memory Systems
Location: refers to whether the memory is internal or external to
the computer
Internal: directly accessible by the CPU,
e.g., main memory, cache, registers
External: accessible by the CPU through an I/O module
Access method
Sequential access: access to records is made in a
specific linear sequence, e.g., magnetic tape
Random access: the storage locations can be accessed in
any order, e.g., RAM and ROM
6. Memory Organization
Some characteristics of Memory Systems
Performance:
The average time required to reach a storage location in
memory and obtain its contents is called the access time
Access time (latency) is the time between "requesting"
data and getting it
Access time = seek time + transfer time
Seek time: time required to position the read-write
head at the desired location
Transfer time: time required to transfer data to or
from the device
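As a hypothetical worked example (the numbers are illustrative, not from the slides): if positioning the read-write head takes about 8 ms (seek time) and moving the requested data takes about 2 ms (transfer time), then the access time is roughly 8 ms + 2 ms = 10 ms.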
6.1 Memory Hierarchy
The design constraints on a computer’s memory can
be summed up by three questions:
− How much is the capacity? Storage capacity
− How fast? Speed
− How expensive? Cost
A variety of technologies are used to implement
memory systems, and across this spectrum of
technologies, the following relationships hold:
− Faster access time, greater cost per bit
− Greater capacity, smaller cost per bit
− Greater capacity, slower access time
6.1 Memory Hierarchy
• The total memory of a computer system is organized as
a hierarchy of memories: registers, cache memory, main
memory, and auxiliary memory, from fastest and smallest
to slowest and largest.
6.1 Memory Hierarchy
CPU logic is usually faster than main memory access
time, with the result that processing speed is limited
primarily by the speed of main memory
The cache is used for storing segments of programs
currently being executed in the CPU and temporary
data frequently needed in the present calculations
The typical access time ratio between cache and main
memory is about 1 to 7
Auxiliary memory access time is usually 1000 times
that of main memory
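To put these ratios into a hypothetical worked example (the numbers are illustrative, not from the slides): if a cache reference takes about 10 ns, a main-memory reference would take on the order of 7 × 10 ns = 70 ns, and an auxiliary-memory reference on the order of 1000 × 70 ns = 70,000 ns (70 µs).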
6.2 Main Memory
The memory unit that communicates directly with the CPU is
called main memory (primary memory).
Most of the main memory in a general-purpose
computer is made up of RAM integrated circuit chips,
but a portion of the memory may be constructed with
ROM chips
6.2 Main Memory
Random-Access Memory (RAM)
Static RAM (SRAM)
Each cell stores a bit with a six-transistor circuit.
Holds its value as long as power is applied and is relatively
insensitive to disturbances such as electrical noise.
Faster and more expensive than DRAM, and used for cache
memory
Dynamic RAM (DRAM)
Each cell stores a bit with a capacitor and a single transistor.
Slower and cheaper than SRAM; the stored charge leaks, so the
cells must be refreshed periodically. Typically used for main memory.
6.3 Auxiliary Memory/External Memory
Storage devices that provide backup storage are called
auxiliary memory (also called external or secondary memory).
• Used to store programs and data that are not needed
immediately by the CPU, large data files, and backup data
• Data that are not urgently needed are stored here.
It is non-volatile
Commonly used secondary memory includes magnetic disks,
optical disks, magnetic tapes, etc.
6.4 Cache memory
A cache memory is a small-capacity, very fast memory
that retains copies of recently used information
from main memory.
It is a small block of fast memory placed between the CPU
and main memory and used to enhance the performance of the CPU.
If the active portions of the program and data are
placed in a fast small memory, the average memory
access time can be reduced,
thus reducing the total execution time of the program
Such a fast small memory is referred to as cache
memory
The cache is one of the fastest components in the
memory hierarchy and approaches the speed of the CPU
6.4 Cache memory
It is located between the main memory and the CPU
When the CPU needs to access memory, the cache is examined first
If the word is found in the cache, it is read from the fast
memory (cache memory)
If the word addressed by the CPU is not found in the
cache, the main memory is accessed to read the word
The basic characteristic of cache memory is its fast
access time
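A tiny Python sketch of this check-the-cache-first behaviour (illustrative only; the mapping policy, sizes, and memory contents below are invented here, not taken from the slides):

    # Illustrative direct-mapped cache lookup: check the cache first,
    # fall back to (slower) main memory on a miss and fill the cache line.

    CACHE_LINES = 8                                          # hypothetical cache size
    main_memory = {addr: addr * 10 for addr in range(64)}    # fake memory contents
    cache = {}                                               # line index -> (tag, word)

    def read(addr):
        line = addr % CACHE_LINES         # which cache line this address maps to
        tag = addr // CACHE_LINES
        entry = cache.get(line)
        if entry is not None and entry[0] == tag:
            return entry[1], "hit"        # word found in the fast cache memory
        word = main_memory[addr]          # miss: read the word from main memory
        cache[line] = (tag, word)         # keep a copy for future references
        return word, "miss"

    print(read(5))    # first access: miss, fetched from main memory
    print(read(5))    # second access: hit, served from the cache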