Chapter 56 CPU Final

Chapter 5

Introduction to Central Processing Unit (CPU)


 Sub topics:
  Introduction to CPU
  Organization of CPU registers
  Single Accumulator Organization
  General Register Organization
  Stack Organization
  Complex Instruction Set Computer (CISC)
  Reduced Instruction Set Computer (RISC)
Introduction to Central Processing Unit (CPU)
 The part of the computer that performs the bulk of data processing operations is called the central processing unit (CPU).
 The CPU is the main unit that dictates the organization of the rest of the computer.
 The CPU is made up of three major parts:
  Register set
  Arithmetic logic unit (ALU)
  Control unit
CPU
 1. Register set: stores intermediate data during the execution of instructions.
 2. Arithmetic logic unit (ALU): performs the micro-operations required for executing the instructions.
 3. Control unit: supervises the transfer of information among the registers and instructs the ALU which operation to perform by generating control signals.

(Figure: block diagram of the CPU showing the Control Unit, the Arithmetic Logic Unit, and the Registers.)
Registers
• In the Basic Computer, there is only one general purpose register, the Accumulator (AC).
• In modern CPUs, there are many general purpose registers.
• It is advantageous to have many registers:
– Transfers between registers within the processor are relatively fast
– Going “off the processor” to access memory is much slower,
because memory access is the most time-consuming operation in a computer
Registers cont…
• There are three types of register organization.
• These internal organizations of registers (the three most common CPU organizations) are:
• Single Accumulator Organization
• General Register Organization
• Stack Organization
5.2. General Register Organization
• The CPU must have some working space (fast to access and close to the CPU).
• This space is used efficiently to store intermediate values.
• Intermediate data that need to be stored include pointers, counters, return addresses, temporary results, and partial products.
• These cannot be kept in main memory, because main-memory access is time consuming.
• It is more efficient and faster to store them inside the processor.
• The solution is to design multiple registers inside the processor and connect them through a common bus.
5.2. General Register Organization
 Bus organization for 7 CPU registers:
 These 7 registers are connected through a common bus system
 2 MUXes: select one of the 7 registers or the external data input, by SELA and SELB
 BUS A and BUS B: form the inputs to a common ALU
 ALU: OPR determines the arithmetic or logic micro-operation
 The result of the micro-operation is available at the external data output and also goes into the inputs of all registers
 3 × 8 decoder: selects the register (by SELD) that receives the information from the ALU
5.2. General Register Organization
• An operation is selected by the ALU operation selector (OPR).
• The result of a micro-operation is directed to a destination register selected by a decoder (SELD).
• Control word: the 14 binary selection inputs (3 bits for SELA, 3 for SELB, 3 for SELD, and 5 for OPR).
• These four control fields are generated by the control unit at the start of each clock cycle, so that the two operands, the ALU operation, and the destination register are all selected within a single clock cycle.
5.2. General Register Organization

(Figure: a) Block Diagram b) Control Word)
5.2. General Register Organization
• The control unit that operates the CPU bus system directs the information flow through the registers and ALU by selecting the various components in the system.
• For example, to perform the operation
R1 ← R2 + R3
• the control unit must provide binary selection variables to the following selector inputs:
1. MUX A selector (SELA): to place the content of R2 onto BUS A
2. MUX B selector (SELB): to place the content of R3 onto BUS B
3. ALU operation selector (OPR): to provide the arithmetic addition R2 + R3
4. Decoder selector (SELD): to transfer the content of the output bus into R1
Encoding of Register Selection Fields:
» SELA or SELB = 000 (External Input): the MUX selects the external data input
» SELD = 000 (None): no destination register is selected, but the contents of the output bus are available at the external output
5.2. General Register Organization
Example:
1. Micro-operation: R1 ← R2 + R3
2. Control word
Field:        SELA  SELB  SELD  OPR
Symbol:       R2    R3    R1    Add
Control word: 010   011   001   00010

Q: Let the content of R2 = 0001 and R3 = 0010. Specify which register is selected by the decoder, and what value will be stored in the target register by the instruction given in the control word above.
Ans: R1, 0011
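The control-word example above can be sketched in code. This is an illustrative encoding, not part of the slides: the field order (SELA, SELB, SELD, OPR, most significant first) and the OPR code for Add (00010) follow this slide's example, while the function and table names are invented.

```python
# Field encodings assumed from the slide: registers R1..R7 are numbered
# 001..111, 000 selects the external input / no destination.
REG = {"Input": 0, "R1": 1, "R2": 2, "R3": 3,
       "R4": 4, "R5": 5, "R6": 6, "R7": 7}
OPR = {"Add": 0b00010}  # only the Add code is given on this slide

def control_word(sela, selb, seld, opr):
    """Pack the four selection fields into one 14-bit control word."""
    return (REG[sela] << 11) | (REG[selb] << 8) | (REG[seld] << 5) | OPR[opr]

# R1 <- R2 + R3
cw = control_word("R2", "R3", "R1", "Add")
print(format(cw, "014b"))  # 010 011 001 00010 -> "01001100100010"
```

The shifts simply place each field at its bit position; any real control unit would emit these 14 bits in parallel rather than as an integer.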
5.3 Stack Organization
 A useful feature included in the CPU of most computers is a stack, or last-in, first-out (LIFO) list.
 Stack: a storage device that stores information in such a manner that the item stored last is the first item retrieved.
 The operation of a stack can be compared to a stack of trays: the last tray placed on top of the stack is the first to be taken off.
5.3 Stack Organization
 Stack pointer (SP): a register that holds the address of the top item in the stack.
 SP always points at the top item in the stack.
 There are exactly two operations that can be performed on a stack:
 Push (push-down): the operation that inserts an item into the stack.
  It adds the new item to the top of the stack.
 Pop (pop-up): the operation that retrieves an item from the stack.
  It removes the item from the top of the stack.
Register Stack
• A stack can be organized as a collection of a finite number of registers.
• In a 64-word stack, the stack pointer contains 6 bits.
• Two one-bit registers describe the stack state:
• FULL is set to 1 when the stack is full;
• EMPTY is set to 1 when the stack is empty.
• The data register DR holds the data to be written into or read from the stack.
The following are the micro-operations
associated with the stack
• Three items are placed in the stack: A, B, and C, in that order.
• The push operation is implemented with the following sequence of micro-operations:

SP ← 0, EMPTY ← 1, FULL ← 0 (initialization)

SP ← SP + 1 (increment the pointer)
M[SP] ← DR (write the item on top of the stack)
If (SP = 0) then (FULL ← 1) (check if the stack is full)
EMPTY ← 0 (mark the stack not empty)

• Note that SP wraps around to 0 after 63.
The following are the micro-operations
associated with the stack
• An item is deleted from the stack only if the stack is not empty.
• The pop operation consists of the following sequence of micro-operations:

DR ← M[SP] (read the item from the top of the stack)
SP ← SP – 1 (decrement the stack pointer)
If (SP = 0) then (EMPTY ← 1) (check if the stack is empty)
FULL ← 0 (mark the stack not full)
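The push and pop sequences above can be sketched as a small simulation. This is an illustrative model, assuming the 64-word register stack and the 6-bit SP wrap-around the slides describe; the class and method names are invented.

```python
class RegisterStack:
    """Toy model of the 64-word register stack with FULL/EMPTY flags."""

    def __init__(self, size=64):
        self.mem = [0] * size
        self.size = size
        self.sp = 0           # SP <- 0
        self.empty = True     # EMPTY <- 1
        self.full = False     # FULL <- 0

    def push(self, dr):
        if self.full:
            raise OverflowError("stack full")
        self.sp = (self.sp + 1) % self.size   # SP <- SP + 1 (6-bit wrap: 63 -> 0)
        self.mem[self.sp] = dr                # M[SP] <- DR
        if self.sp == 0:                      # if SP = 0 then FULL <- 1
            self.full = True
        self.empty = False                    # EMPTY <- 0

    def pop(self):
        if self.empty:
            raise IndexError("stack empty")
        dr = self.mem[self.sp]                # DR <- M[SP]
        self.sp = (self.sp - 1) % self.size   # SP <- SP - 1
        if self.sp == 0:                      # if SP = 0 then EMPTY <- 1
            self.empty = True
        self.full = False                     # FULL <- 0
        return dr

s = RegisterStack()
for item in ("A", "B", "C"):
    s.push(item)
print(s.pop(), s.pop(), s.pop())  # C B A
```

The `% self.size` mirrors the 6-bit register arithmetic: incrementing SP past 63 gives 0, which is exactly the condition the FULL check relies on.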
Reverse Polish Notation (RPN)
• A stack organization is very effective for evaluating arithmetic expressions.
• The common mathematical method of writing arithmetic expressions imposes difficulties when they are evaluated by a computer.
• Common arithmetic expressions are written in infix notation, with each operator written between the operands.
• Reverse Polish notation is a postfix notation (it places operators after the operands).
• Example: consider the following expression:
• A + B   Infix notation
• + A B   Prefix or Polish notation
• A B +   Postfix or reverse Polish notation
Reverse Polish Notation (RPN)
• Reverse Polish notation is in a form suitable for stack manipulation.
• The following expressions are written in reverse Polish notation as:
• A * B + C * D  →  (AB *) + (CD *)  →  AB * CD * +
• (3 * 4) + (5 * 6)  →  34 * 56 * +
Evaluation procedure:
1. Scan the expression from left to right.
2. When an operator is reached, perform the operation with the two operands found on the left side of the operator.
3. Replace the two operands and the operator by the result obtained from the operation.
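The three-step procedure above maps directly onto a stack-based evaluator. The following is a minimal sketch; the function name and token handling are assumptions, not from the slides.

```python
def eval_rpn(tokens):
    """Evaluate a reverse Polish expression given as a list of tokens."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:          # 1. scan left to right
        if tok in ops:          # 2. operator: use the two operands on its left
            b = stack.pop()     #    (topmost item is the second operand)
            a = stack.pop()
            stack.append(ops[tok](a, b))  # 3. replace them with the result
        else:
            stack.append(float(tok))      # operand: push it
    return stack.pop()

print(eval_rpn("3 4 * 5 6 * +".split()))  # 42.0
```

Note the pop order: for non-commutative operators like `-` and `/`, the item on top of the stack is the *second* operand.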
Reverse Polish Notation (RPN)
• Example: consider the following expression
• Infix: (3 * 4) + (5 * 6) = 42
• Reverse Polish notation: 3 4 * 5 6 * +

Stack evaluation:
• Get the next value.
• If the value is data: push it.
• Else, if the value is an operation: pop twice, evaluate, and push the result.

3 4 * 5 6 * +
12 5 6 * +
12 30 +
42
Reverse Polish Notation (RPN)
• Reverse Polish notation: 3 4 * 5 6 * +
• Consider the stack operation below:
• First the number 3 is pushed onto the stack, then the number 4.
• The next symbol is the multiplication operator *.
• This causes a multiplication of the two topmost items in the stack.
• The two operands are popped and their product is placed on top of the stack, replacing them.
5.4 Complex Instruction Set Computers: CISC
• A computer with a large number of instructions is called a complex instruction set computer, or CISC.
• Complex instruction set computers are often used in scientific computing applications requiring a large amount of floating-point arithmetic.
CISC Characteristics
• A large number of instructions – typically from 100 to 250 instructions.
• Some instructions that perform specialized tasks and are used infrequently.
• A large variety of addressing modes – typically 5 to 20 different modes.
• Variable-length instruction formats.
• Instructions that manipulate operands in memory.
Reduced Instruction Set Computers: RISC
• A computer with few instructions and a simple construction is called a reduced instruction set computer, or RISC.
• The RISC architecture is simple and efficient.
The major characteristics of RISC architecture are:
– Relatively few instructions
– Relatively few addressing modes
– Memory access limited to load and store instructions
– All operations done within the registers of the CPU
– Fixed-length, easily decoded instruction format
Chapter 6: Memory Organization
Sub topics include:
– Memory Hierarchy
– Main Memory
– External Memory
– Cache Memory
6. Memory Organization
 The memory unit is an essential component in any digital computer, since it is needed for storing programs and data.
 No single technology is optimal in satisfying the memory requirements of a computer system.
 Memory exhibits the widest range of type, technology, organization, performance, and cost of any computer component.
 Not all stored information is needed by the CPU at the same time.
 Therefore, it is more economical to use low-cost storage devices as backup for storing the information that is not currently used by the CPU.
6. Memory Organization
 Some characteristics of memory systems:
 Location: refers to whether the memory is internal or external to the computer
  Internal: directly accessible by the CPU
   e.g. main memory, cache, registers
  External: accessible by the CPU through an I/O module
   e.g. magnetic disks, tapes, optical disks
 Access method
  Sequential access: access to records is made in a specific linear sequence, e.g. magnetic tape
  Random access: the storage locations can be accessed in any order, e.g. RAM and ROM
6. Memory Organization
 Some characteristics of memory systems:
 Performance:
  The average time required to reach a storage location in memory and obtain its contents is called the access time.
  Access time (latency) is the time between "requesting" data and getting it.
  Access time = seek time + transfer time
   Seek time: time required to position the read-write head at a location
   Transfer time: time required to transfer data to or from the device
  Transfer rate: the rate at which data can be moved into or out of a memory unit
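The access-time relation above can be illustrated with a quick calculation. The functions below just restate the two formulas; the numbers (a disk-style device) are made up for illustration.

```python
def transfer_time(bytes_moved, rate_bytes_per_ms):
    """Transfer time = amount of data / transfer rate."""
    return bytes_moved / rate_bytes_per_ms

def access_time(seek_ms, transfer_ms):
    """Access time (latency) = seek time + transfer time."""
    return seek_ms + transfer_ms

# Illustrative only: 9 ms seek, 4096-byte block, 100,000 bytes/ms rate.
t = access_time(9.0, transfer_time(4096, 100_000))
print(round(t, 5), "ms")
```

For a moving-head device like a disk, the seek term dominates, which is one reason random access to external memory is so much slower than to internal memory.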
6.1 Memory Hierarchy
 The design constraints on a computer’s memory can be summed up by three questions:
− How much? Storage capacity
− How fast? Speed (access time)
− How expensive? Cost
 A variety of technologies are used to implement memory systems, and across this spectrum of technologies, the following relationships hold:
− Faster access time → greater cost per bit
− Greater capacity → smaller cost per bit
− Greater capacity → slower access time
6.1 Memory Hierarchy
• The total memory of a computer system is organized as a hierarchy of memories, as shown below.
• Economics and performance are the basis for the hierarchical organization of memory.
6.1 Memory Hierarchy
 As one goes down the hierarchy, the following occur:
• Decreasing cost per bit (cheaper)
• Increasing storage capacity
• Increasing access time (slower)
• Decreasing frequency of access of the memory by the processor
 The memory unit that directly communicates with the CPU is called the main memory.
 Devices that provide backup storage are called auxiliary memory.
 The memory hierarchy system consists of all storage devices employed in a computer system, from the slow but high-capacity auxiliary memory, to a relatively faster main memory, to an even smaller and faster cache memory.
6.1 Memory Hierarchy
 The main memory occupies a central position: it communicates directly with the CPU, and with auxiliary memory devices through an I/O processor.
 A special very-high-speed memory called the cache is used to increase the speed of processing by making current programs and data available to the CPU at a rapid rate.
6.1 Memory Hierarchy
 CPU logic is usually faster than main-memory access time, with the result that processing speed is limited primarily by the speed of main memory.
 The cache is used for storing segments of programs currently being executed in the CPU, and temporary data frequently needed in the present calculations.
 The typical access-time ratio between cache and main memory is about 1 to 7.
 Auxiliary-memory access time is usually about 1000 times that of main memory.
6.2 Main Memory
 The memory unit that communicates directly with the CPU is called main memory (primary memory).
 Most of the main memory in a general-purpose computer is made up of RAM integrated-circuit chips, but a portion of the memory may be constructed with ROM chips.
 RAM – random-access memory, or read/write memory
  Integrated RAM chips are available in two possible operating modes, static and dynamic.
 ROM – read-only memory
6.2 Main Memory
Random-Access Memory (RAM)
 Static RAM (SRAM)
  Each cell stores a bit with a six-transistor circuit.
  Retains its value indefinitely, as long as it is kept powered.
  Relatively insensitive to disturbances such as electrical noise.
  Faster and more expensive than DRAM; used for cache memory.
 Dynamic RAM (DRAM)
  Each cell stores a bit with a capacitor and a transistor.
  Loses its stored information in a very short time (a few milliseconds), even while the power supply is on.
  Values must be refreshed every 10–100 ms.
  Sensitive to disturbances.
  Slower and cheaper than SRAM; used for main memory.
6.2 Main Memory
Read-Only Memory (ROM)
 ROM is used for storing programs that are PERMANENTLY resident in the computer.
• The ROM portion of main memory is needed for storing an initial program called a bootstrap loader.
• The bootstrap loader (e.g. the BIOS) is a program whose function is to start the computer software operating when power is turned on.
• Nonvolatile memory: the contents of ROM remain unchanged after power is turned off.
6.3 Auxiliary Memory/External Memory
 Storage devices that provide backup storage are called auxiliary, external, or secondary memory.
• Used to store programs and data that are not needed immediately by the CPU, large data files, and backup data.
 It is non-volatile.
 Commonly used secondary memory includes magnetic disks, optical disks, magnetic tapes, etc.
6.4 Cache memory
 A cache memory is a small-capacity, very fast memory that retains copies of recently used information from main memory.
 It is a small, fast memory used to enhance the performance of the CPU.
 If the active portions of the program and data are placed in a fast small memory, the average memory-access time can be reduced,
 thus reducing the total execution time of the program.
 Such a fast small memory is referred to as a cache memory.
 The cache is one of the fastest components in the memory hierarchy, and approaches the speed of the CPU components.
6.4 Cache memory
 It is located between the main memory and the CPU.
 When the CPU needs to access memory, the cache is examined first.
 If the word is found in the cache, it is read from the fast cache memory.
 If the word addressed by the CPU is not found in the cache, the main memory is accessed to read the word.
 The basic characteristic of cache memory is its fast access time.
