
Input / Output Organization

and
Memory

UNIT - 4

Dr Yogish H K, Professor, Dept. of ISE


Syllabus
2

◻ I/O Organization: Accessing I/O Devices, Interrupts – Interrupt Hardware, Enabling and Disabling Interrupts, Handling Multiple Devices, Direct Memory Access: Bus Arbitration
◻ The Memory System: Basic Concepts, Speed, Size and Cost of Memory Systems, Cache Memories – Mapping Functions
Introduction
3

◻ A basic feature of a computer is its ability to exchange data with other devices.
◻ Example: computers are used extensively to communicate with other computers over the Internet and to access information around the globe.
◻ In other applications, the input to the computer may come from a sensor switch, a digital camera, a microphone, an alarm, etc.



Introduction ..
4

Output of the computer
◻ may be a sound signal to be sent to a speaker, a digitally coded command to change the speed of a motor, open a valve, cause a robot to move in a specified manner, etc.
◻ So, a general-purpose or special-purpose computer should have the ability to exchange and process information with a wide range of devices in varying environments.



Accessing I/O devices
5

◻ A Single Bus Structure: the simplest arrangement for connecting I/O devices to a computer is a single bus.

[Figure: Processor, memory, and I/O devices 1 to n connected to a single bus]

◻ Multiple I/O devices may be connected to the processor and the memory via the bus.
◻ The bus consists of three sets of lines to carry address, data, and control signals.
Accessing I/O devices
6

◻ Each I/O device is assigned a unique set of addresses.
◻ To access an I/O device, the processor places the address on the address lines.
◻ The device recognizes the address and responds to the control signals.
◻ The requested data is then transferred over the data lines.
Accessing I/O devices
7

There are 2 ways to deal with I/O devices: 1) memory-mapped I/O and 2) I/O-mapped I/O.
1) Memory-Mapped I/O
◻ In this approach, memory and I/O devices share a common address space.
◻ Any data-transfer instruction (like Move or Load) can be used to exchange information.
◻ For example:
- Move DATAIN, R0 ; this instruction copies the contents of location DATAIN into register R0.
  Here, DATAIN is the address of the input buffer of the keyboard.
- Move R0, DATAOUT ; this instruction copies the contents of R0 into location DATAOUT.
  DATAOUT is the address of the output buffer of the monitor.
Accessing I/O devices
8

2) I/O-Mapped I/O
◻ The memory and I/O address spaces are separate.
◻ Special instructions named IN and OUT are used for data transfer.
◻ Advantage of a separate I/O space: I/O devices deal with fewer address lines.

1. Memory-Mapped I/O
What happens? I/O devices (like keyboards and monitors) share the same address space as memory.
How to use? You can use normal instructions like Move or Load to send/receive data to/from I/O devices, just like working with memory.
Example:
Move DATAIN, R0: reads data from the keyboard (input buffer) into register R0.
Move R0, DATAOUT: sends data from R0 to the monitor (output buffer).
2. I/O-Mapped I/O
What happens? I/O devices use a separate address space from memory.
How to use? Special instructions like IN and OUT are used to talk to I/O devices.
Advantage: I/O devices need fewer address lines, so the hardware is simpler.
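As a rough illustration of memory-mapped I/O in C (an assumption added for this note, not code from the slides): device registers appear at ordinary memory addresses and are accessed through volatile pointers. The addresses 0x4000/0x4004 and the register layout are hypothetical.

#include <stdint.h>

/* Hypothetical memory-mapped register addresses (illustration only). */
#define DATAIN   ((volatile uint8_t *)0x4000)   /* keyboard input buffer  */
#define DATAOUT  ((volatile uint8_t *)0x4004)   /* display output buffer  */

static inline uint8_t read_keyboard(void)      { return *DATAIN; }   /* like Move DATAIN, R0  */
static inline void    write_display(uint8_t c) { *DATAOUT = c; }     /* like Move R0, DATAOUT */

With I/O-mapped I/O, by contrast, these accesses would have to be compiled to special IN/OUT instructions rather than ordinary loads and stores.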
I/O Interface for an Input Device
9
An I/O interface is like a middleman between an input device (like a keyboard) and the system bus (the communication pathway). Here is how it works:

• The I/O device is connected to the bus using an I/O interface circuit, which has an address decoder, control circuits, and data and status registers.

[Figure: I/O interface connecting the bus (address, data, and control lines) to the input device; it consists of an address decoder, control circuits, and data and status registers]



I/O Interface for an Input Device
10

• The address decoder decodes the address placed on the address lines, enabling the device to recognize its address.
◻ The data register holds the data being transferred to or from the processor.
There are 2 types:
i) DATAIN: input buffer associated with the keyboard.
ii) DATAOUT: output data buffer of a display/printer.
• The status register contains information relevant to the operation of the I/O device.
• The data and status registers are connected to the data lines and have unique addresses.
• Finally, the I/O interface circuit coordinates I/O transfers.
MECHANISMS USED FOR INTERFACING
I/O-DEVICES
11
These are the mechanisms used to manage communication and data transfer between I/O devices and the processor:

1) Program-Controlled I/O
2) Interrupt I/O
3) Direct Memory Access (DMA)

1) Program-Controlled I/O
• The processor repeatedly checks a status flag to achieve the required synchronization between the processor and the I/O device. (We say that the processor polls the device.)
• Main drawback:
The processor wastes time checking the status of the device before the actual data transfer takes place.

Program-Controlled I/O:
The processor actively checks (or "polls") whether the I/O device is ready for data transfer.
Example: continuously checking whether a printer is ready to print.
Problem: inefficient, as the processor wastes time waiting.
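A minimal sketch of program-controlled I/O in C (an assumption for this note, not from the slides): the processor busy-waits on a status register until a ready bit is set, then reads the data register. The register addresses and bit position are hypothetical.

#include <stdint.h>

#define KBD_STATUS  ((volatile uint8_t *)0x4001)   /* hypothetical status register */
#define KBD_DATA    ((volatile uint8_t *)0x4000)   /* hypothetical data register   */
#define READY_BIT   0x01                           /* assumed "data ready" flag    */

/* Poll the device: the processor wastes cycles in this loop until the flag is set. */
uint8_t read_char_polled(void)
{
    while ((*KBD_STATUS & READY_BIT) == 0)
        ;                        /* busy-waiting is the main drawback of this scheme */
    return *KBD_DATA;            /* the actual data transfer happens only now        */
}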
MECHANISMS USED FOR INTERFACING
I/O-DEVICES..
12

2) Interrupt I/O
• The I/O device initiates the action instead of the processor.
• The I/O device sends an INTR signal over the bus whenever it is ready for a data-transfer operation.
• In this way, the required synchronization is achieved between the processor and the I/O device.

Interrupt I/O:
The I/O device sends an interrupt signal to the processor when it is ready.
Example: a keyboard sends an interrupt when a key is pressed.
Advantage: the processor does not waste time polling; it responds only when needed.

3) Direct Memory Access (DMA)
• The device interface transfers data directly to/from the memory without continuous involvement by the processor.
• DMA is a technique used for high-speed I/O devices.

Direct Memory Access (DMA):
A special hardware unit called the DMA controller handles the data transfer between memory and the I/O device, bypassing the processor.
Example: transferring large files to a hard disk.
Advantage: high speed and minimal processor involvement.



Interrupts
13

• In program-controlled I/O, when the processor continuously monitors the status of the device, it does not perform any useful tasks.
• An alternative approach is for the I/O device to alert the processor when it becomes ready.
• It does so by sending a hardware signal called an interrupt to the processor.
• At least one of the bus control lines, called an interrupt-request line, is dedicated for this purpose.
• The processor can perform other useful tasks while it is waiting for the device to be ready.
Interrupts .
14

• Consider a task that requires computations to be performed and the results to be printed on a line printer.
• Assume the line printer prints one line of information at a time.
• While one line is being printed, no other computation-related activity can proceed.
• Assume two routines, related like a main program and a subroutine, that are executed alternately:
a) Compute, and
b) Print.



Interrupts ..
15

• While printing is taking place, the computer continues to execute the Compute routine.
• When the printer is ready to take the next line, the Print routine is called, data is moved into the printer buffer, and control is passed back to the Compute program, like a subroutine return, so that Compute can continue.

1. The Compute task is running on the CPU.
2. The printer takes time to print one line. While this happens, the CPU would normally sit idle, wasting time.
3. Instead of waiting, the CPU continues computing other things while the printer is busy.
4. When the printer is ready for the next line, it sends an interrupt signal to the CPU.
5. The CPU temporarily pauses the Compute task, runs the Print task to send data to the printer, and then returns to the Compute task after finishing.



Interrupts ...
16

Routine: a general term for any callable block of code.
Subroutine: a specific type of routine called by another part of a program.

• The routine executed in response to an interrupt request is called an interrupt-service routine (ISR).
• In the previous example, the Print routine is the interrupt-service routine.



Interrupts ….
17

• The processor is executing the instruction located at address i when an interrupt occurs.
• The routine executed in response to an interrupt request is called the interrupt-service routine.
• When an interrupt occurs, control must be transferred to the interrupt-service routine.
• But before transferring control, the current contents of the PC (i+1) must be saved in a known location.
• This enables the return-from-interrupt instruction to resume execution at i+1.
• The return address, i.e., the contents of the PC, is usually stored on the processor stack.
Interrupt Hardware
18

[Figure: An equivalent circuit for an open-drain bus used to implement a common interrupt-request line — each device acts as a switch to ground on the INTR line, which is pulled up to Vdd by a resistor]



Interrupt Hardware .
19

An I/O device requests an interrupt by activating a bus line called interrupt-request.
Computers may have several I/O devices that can request an interrupt.
A single interrupt-request line may be used to serve n devices.
All devices are connected to the line (control line) via switches to ground.
To request an interrupt, a device closes its associated switch.
Interrupt Hardware ..
20

If all interrupt signals INTR1 to INTRn are inactive, that is, if all switches are open, the voltage on the interrupt-request line is equal to Vdd.
This is the inactive state of the line (no interrupt request is pending).
When a device requests an interrupt by closing its switch, the voltage on the line drops to 0, causing the interrupt-request signal INTR received by the processor to go to 1.
Since the closing of one or more switches causes the line voltage to drop to 0, the value of INTR is the logical OR of the requests from the individual devices.



Interrupt Hardware ...
21

In the electronic implementation of the circuit, special gates known as open-collector (for bipolar circuits) or open-drain (for MOS circuits) are used to drive the INTR line.
The output of an open-collector or open-drain gate is equivalent to a switch to ground that is open when the gate's input is in the 0 state and closed when it is in the 1 state.



Interrupt Hardware ….
22

The voltage level, and hence the logic state, at the output of the gate is determined by the data applied to all the gates connected to the bus, according to the equation

INTR = INTR1 + INTR2 + . . . + INTRn

Resistor R is called a pull-up resistor because it pulls the line voltage up to the high-voltage state when the switches are open.



Enabling and Disabling Interrupts
23

The interrupt facility provided in the computer must give the programmer complete control over the events that take place during program execution.
The arrival of an interrupt from an external device causes the processor to suspend the execution of one program and start the execution of another.
Because interrupts may arrive at any time, they may alter the sequence of events from that envisaged by the programmer.
Hence, interruption of program execution must be carefully controlled.
Such a facility is the enabling and disabling of interrupts as desired.

Interrupts can happen at any time: when an interrupt signal (like one from a device) occurs, it forces the processor to stop executing its current program and start handling the interrupt.

Problem with repeated interrupts: if the interrupt-request signal remains active and keeps triggering interruptions while the processor is handling an interrupt, the system could end up in an infinite loop, handling interrupts without ever returning to the main program and effectively getting stuck.
Enabling and Disabling Interrupts.
24

When a device activates the interrupt-request signal, it keeps this signal activated until it learns that the processor has accepted its request.
The interrupt-request signal therefore remains active during the execution of the interrupt-service routine, until an instruction is reached that accesses the device in question.
It is essential to ensure that this active request signal does not lead to successive interruptions, causing the system to enter an infinite loop from which it cannot recover.
For example, the printer might send another interrupt request while the first interrupt is still being processed, which could cause the Compute program to be suspended repeatedly before it can actually complete.
ISR (Interrupt Service Routine) is a special block of code that the processor executes in response to an interrupt.

Enabling and Disabling Interrupts..


25
There are three options for enabling and disabling interrupts in a simple way.

The first option is for the processor hardware to ignore the interrupt-request line until the execution of the first instruction of the interrupt-service routine has been completed.
The first executable instruction in the interrupt-service routine can then be a Disable Interrupt (DI) instruction.
With DI, the programmer is assured that no repeated interruptions will occur until an Enable Interrupt instruction is executed.
The Enable Interrupt instruction is executed as the last instruction in the interrupt-service routine, before the Return-from-Interrupt instruction.

When an interrupt occurs, the processor does not immediately disable interrupts; it first enters the Interrupt Service Routine (ISR).
The first instruction inside the ISR disables interrupts, using an instruction like Disable Interrupt (DI).
This ensures that no new interrupts are accepted while the ISR is running.
The last instruction in the ISR enables interrupts again (Enable Interrupt) before returning control to the interrupted program.
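A hedged sketch of this first option in C-style code: disable_interrupts() and enable_interrupts() stand in for the DI and EI machine instructions and are not real library calls.

/* Hypothetical stand-ins for the DI and EI instructions (illustration only). */
static void disable_interrupts(void) { /* would execute DI */ }
static void enable_interrupts(void)  { /* would execute EI */ }
static void service_device(void)     { /* access the device so it drops its INTR signal */ }

/* Skeleton of an interrupt-service routine under option 1. */
void interrupt_service_routine(void)
{
    disable_interrupts();    /* first executable instruction: DI               */
    service_device();        /* perform the action requested by the interrupt  */
    enable_interrupts();     /* last instruction before Return-from-Interrupt  */
}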
Enabling and Disabling Interrupts...
26

The second option, suitable for a simple processor with only one interrupt-request line, is to have the processor automatically disable interrupts before starting the execution of the interrupt-service routine.
After saving the PC and PSR (Processor Status Register) on the stack, the processor performs the equivalent of an Interrupt-Disable instruction. The interrupt-enable (IE) flag is often a single bit in the PSR.
When a Return-from-Interrupt instruction is executed at the end of the ISR, the contents of the PSR and PC are restored from the stack, and interrupts are enabled again because the restored PSR has its IE bit set.

In simple processors, when an interrupt occurs, the processor automatically disables interrupts before starting the ISR.
The processor saves the Program Counter (PC) and Processor Status Register (PSR) on the stack and then disables interrupts.
At the end of the ISR, the processor restores the PC and PSR, and interrupts are re-enabled.



Enabling and Disabling Interrupts....
27

The third option is for the processor to have a special interrupt-request line whose interrupt-handling circuit responds only to the leading edge of the signal.
Such a line is said to be edge-triggered.
In this case, the processor receives only one request regardless of how long the line is activated.

Edge-Triggered Interrupts
How it works:
The processor responds only to the first "edge" (change in signal) of the interrupt-request line.
Once the interrupt-request line becomes active, the processor processes the interrupt.
No new interrupts are processed while the line remains active, preventing multiple interrupts for the same event.
Why it's needed:
This prevents repeated interrupts when the request signal stays active, which can happen in the first two methods.



Enabling and Disabling Interrupts.....
Summary – assuming that interrupts are initially enabled
28

1. The device raises an interrupt request.
2. The program currently being executed is interrupted (temporarily suspended) by the processor.
3. All interrupts are disabled.
4. The device is informed that its request has been granted, and in response, the device deactivates the interrupt-request signal.
5. The PSR and the return address are saved on the stack.
6. The action requested by the interrupt is performed by the ISR.
7. Interrupts are enabled again and execution of the interrupted program is resumed.



Handling Multiple devices
29
When a processor is connected to multiple devices that can initiate interrupts, several challenges arise because it is uncertain which device will generate an interrupt and when.
When the processor is connected to multiple devices capable of initiating interrupts, there is no definite order in which they will generate interrupts.
Because these devices are operationally independent, a device may request an interrupt while another is being serviced, or several devices may request interrupts at exactly the same time.



Handling Multiple devices .
30

◻ This gives rise to a number of questions to be answered:
1. How can the processor recognize the device requesting an interrupt?
2. Given that different devices are likely to require different interrupt-service routines, how can the processor obtain the starting address of the appropriate routine in each case?
3. Should a device be allowed to interrupt the processor while another interrupt is being serviced?
4. How should two or more simultaneous interrupt requests be handled?
Handling Multiple devices ...
31

METHOD 1: Polling
The information needed to determine the device requesting an interrupt is available in each device's status register, through its IRQ bit.
One way is to poll the IRQ bits, determine which device is requesting service, and service it.
If multiple devices interrupt, the IRQ bits are polled one after the other and the requests are serviced by executing the appropriate service routine for each device.
The polling scheme is easy to implement, but its main disadvantage is the time spent interrogating the IRQ bits of all the devices in order.
Alternatively, the most common and efficient approach is vectored interrupts.

Polling Interrupts:
How it works: the processor checks the interrupt-request (IRQ) bit in the status register of each device, one by one, to see which device is requesting an interrupt.
Advantages: simple to implement.
Disadvantages: time-consuming, as the processor has to check each device's IRQ status sequentially, leading to inefficiency, especially when many devices are involved.
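A rough sketch of Method 1 in C (an assumption for illustration, not code from the slides): each device exposes a status register with an IRQ bit, and the processor interrogates them in a fixed order and calls the matching service routine. The register layout, bit position, and names are hypothetical.

#include <stdint.h>

#define NUM_DEVICES 4
#define IRQ_BIT     0x01                 /* assumed position of the IRQ flag */

/* Placeholders standing in for real device status registers and ISRs. */
static volatile uint8_t status_reg[NUM_DEVICES];
static void (*service_routine[NUM_DEVICES])(void);

/* Called when INTR is asserted: poll every device in a fixed order.
   The polling order is what determines the devices' relative priority. */
void poll_and_service(void)
{
    for (int i = 0; i < NUM_DEVICES; i++) {
        if ((status_reg[i] & IRQ_BIT) && service_routine[i] != 0)
            service_routine[i]();        /* service the requesting device */
    }
}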



Handling Multiple devices – Vectored Interrupts
32

INTERRUPT VECTOR
• A device requesting an interrupt can identify itself by sending a special code to the processor over the bus.
• This enables the processor to identify individual devices even if they share a single interrupt-request line.
• The code length is typically in the range of 4 to 8 bits. This code identifies the vector address (memory location) of that device, where the processor/interrupt handler can find the address of the interrupt-service routine for that device.
• The processor reads the address of the ISR from the interrupt vector of the interrupting device and loads it into the PC so that execution of the ISR can start.
• The interrupt vector may also include a new value for the processor status register (PSW).

This vector (usually 4 to 8 bits) helps the processor identify which device sent the interrupt, even if multiple devices are connected to a single interrupt-request line.
The interrupt vector corresponds to the memory address of the Interrupt Service Routine (ISR) for that particular device.
The processor uses this vector to look up the appropriate ISR address and starts executing that routine.
Along with the ISR address, the interrupt vector might also contain a value for the Processor Status Word (PSW), which could modify the processor's state (like interrupt enable/disable settings).
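A simplified sketch (an assumption, not from the slides) of vectored-interrupt dispatch in C: the 4-to-8-bit code received from the device indexes a table whose entries hold the ISR addresses, mirroring "load the ISR address into the PC". The table size and names are hypothetical.

#define NUM_VECTORS 16                        /* enough entries for a 4-bit code */

typedef void (*isr_t)(void);

static isr_t interrupt_vector_table[NUM_VECTORS];   /* one ISR address per device code */

/* Conceptual handler: use the code sent by the device to find and run its ISR. */
void dispatch_interrupt(unsigned vector_code)
{
    if (vector_code < NUM_VECTORS && interrupt_vector_table[vector_code] != 0)
        interrupt_vector_table[vector_code]();   /* like loading the vector into the PC */
}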
Handling Multiple devices – Vectored Interrupts .
33

In most computers, I/O devices send the interrupt-vector code over the data bus, using the bus control signals to ensure that devices do not interfere with each other.
When a device interrupts, the processor may not be ready to honor the request and take the vector code immediately:
1. First, the processor has to complete the current instruction under execution.
2. Interrupts might be disabled at the time the request is raised.
The interrupting device must therefore wait to put data on the bus until the processor is ready to receive it.
When the processor is ready to receive the interrupt-vector code, it activates the interrupt-acknowledge line, INTA.
The I/O device responds by sending its interrupt-vector code and turning off the INTR signal.

Interrupt Acknowledgement:
Before the processor can accept and respond to the interrupt, it must finish the current instruction it is executing.
The processor may also have interrupts disabled temporarily, meaning the interrupting device has to wait.
Once the processor is ready, it activates the Interrupt-Acknowledge line (INTA), signaling the device to send its interrupt-vector code.
The device then sends its code and turns off the interrupt-request signal (INTR).
Handling Multiple devices – Vectored Interrupts – Interrupt nesting
34

Interrupts may be disabled during the execution of an interrupt-service routine so that repeated interrupts from the same device do not occur continuously.
The same approach may be used when several devices are involved.
This ensures that, once the execution of an ISR is started, it always continues to completion before the processor accepts an interrupt request from another device or the same device.
Interrupt-service routines are (or should be) short, and the delay they may cause is acceptable for most simple devices.



Handling Multiple devices – Vectored Interrupts – Interrupt nesting ..
35

However, for some devices a (relatively) long delay in responding to an interrupt request may lead to erroneous results.
Example: real-time clock or system clock.
This device sends interrupt requests at regular intervals; for each of them, the processor executes a small program (ISR) to update counters in the memory that keep track of seconds, minutes, and so on.
Proper operation requires that the delay in responding to an interrupt request from the real-time clock be small in comparison with the interval between two successive clock requests.

Real-Time Clock Interrupt Example (Simplified):
The real-time clock (RTC) sends interrupt requests at regular intervals (e.g., every second).
Each time the RTC sends an interrupt, the processor runs a small program (ISR) to update the time in memory (seconds, minutes, etc.).
For the system to work properly, the processor must respond quickly to the RTC interrupt so that it doesn't miss any requests.
If the processor takes too long to respond to one interrupt, it might miss the next one, causing timekeeping errors.



Handling Multiple devices – Vectored Interrupts – Interrupt nesting – Device priority
36

◻ To meet the requirement of responding to some requests immediately, it is necessary to accept an interrupt request during the execution of the interrupt-service routine for another device.
◻ This leads to the requirement that I/O devices be organized in a priority structure.
◻ The rule, then, is to honor an interrupt request from a device of higher priority while the processor is servicing a request from a device of lower priority.



Handling Multiple devices – Vectored Interrupts
– Interrupt nesting – Processor priority
37

◻ A multiple-level priority organization means that, during the execution of an interrupt-service routine, interrupt requests are accepted from some devices and not from others, depending on the device's priority and the processor's current priority.
◻ To implement this scheme, there must be a provision to assign a priority level to the processor that can be changed under program control.
◻ The priority level of the processor is the priority level of the program that is currently being executed.
◻ In short, the processor accepts interrupts only from devices that have priorities higher than its own.



Handling Multiple devices – Vectored Interrupts –
Interrupt nesting – Processor priority in PS
38

◻ When execution of an ISR for some device is started, the priority of the processor is raised to that of the device, so that:
- Interrupts from devices at the same or lower priority level are (logically) disabled.
- However, interrupt requests from higher-priority devices will continue to be accepted.
◻ The processor priority is usually encoded in a few bits of the processor status word (PSW).
- The processor priority can be changed by program instructions that write into the PS.
- The instructions that can change the processor priority are called privileged instructions; they can be executed only when the processor is running in supervisor mode.







Handling Multiple devices – Vectored Interrupts – Interrupt nesting –
Processor priority in PS – Privileged Instructions –Supervisor Mode…
41

◻ The processor is in supervisor mode only when executing operating-system routines.
◻ It switches to user mode before beginning to execute application programs.
◻ This ensures that a user program cannot accidentally, or intentionally, change the priority of the processor and disrupt the system's operation.
◻ An attempt to execute a privileged instruction while in user mode leads to a special type of interrupt called a privilege exception.



Handling Multiple devices – Vectored Interrupts – Interrupt nesting –
Processor priority – Priority arbitration
42

◻ A multiple-priority scheme can be implemented using separate interrupt-request (IR) and interrupt-acknowledge (IA) lines for each device.
◻ Each of the IR lines is assigned a different priority level.
◻ Interrupt requests received over these lines are sent to a priority arbitration circuit in the processor.
◻ A request is accepted only if it has a higher priority level than that currently assigned to the processor.



Implementation of interrupt priority using individual
interrupt request and acknowledge lines.
43

Arbitration refers to the process of determining the order or priority in which multiple competing requests are granted access to a shared resource, such as a processor or a communication bus.

[Figure: Implementation of interrupt priority using individual interrupt-request lines (INTR1 ... INTRp) and interrupt-acknowledge lines (INTA1 ... INTAp) between each device and the priority arbitration circuit in the processor]

Multiple Priority Scheme Explained:
• Separate interrupt lines: each device is assigned its own interrupt-request (IR) and interrupt-acknowledge (IA) lines, and each IR line is assigned a different priority level.
• Priority arbitration: when a device sends an interrupt request, it goes through a priority arbitration circuit in the processor. The processor accepts an interrupt request only if it has a higher priority than the processor's current priority level.
• Handling requests: this scheme ensures that higher-priority interrupts are processed first, even if a lower-priority interrupt is already being serviced.



Handling Multiple devices – Vectored Interrupts – Interrupt
nesting – Processor priority – Simultaneous requests
44


When simultaneous requests are received from two or more devices of the same type, the processor must have a means of deciding which request to service first.
In this case, several devices share one interrupt-request line.
◻ Polling the status registers of such devices is simple; priority is determined by the order of polling.
◻ When vectored interrupts are used, it must be ensured that only one device is selected to send its interrupt vector.

If multiple devices request an interrupt at the same time, the processor needs a way to prioritize and select which request to service first.

Shared interrupt-request line:
- Polling: devices share a single interrupt-request line. The processor checks the status of each device in sequence; the order of polling determines which device's request is serviced first, so the first device checked is effectively given the highest priority. Priorities can thus be assigned to devices through the order in which the processor polls them.
- Vectored interrupts: each device sends an interrupt vector to the processor, which tells the processor the address of the interrupt-service routine (ISR) for that device. It must be ensured that only one device sends its interrupt vector at a time, preventing conflicts. This is typically handled by the processor's interrupt arbitration, which ensures that only one device's interrupt vector is acknowledged even if multiple devices make requests simultaneously.
In this way, interrupt handling stays orderly and conflicts are prevented when multiple devices send requests at the same time.



Handling Multiple devices – Vectored Interrupts – Interrupt
nesting – Processor priority – Simultaneous requests ..
45

◻ A widely used mechanism for connecting devices of the same type that share one interrupt-request line is to connect them in a daisy chain.
◻ The interrupt-request line (INTR) is common to all the devices.
◻ The interrupt-acknowledge line (INTA) is connected in a daisy-chain fashion, such that the INTA signal propagates serially through the devices.



Handling Multiple devices – Vectored Interrupts – Interrupt
nesting – Processor priority – Simultaneous requests …
46

◻ When one or more devices raise an interrupt request, the INTR line is activated.
◻ The processor responds by sending the INTA signal to device 1.
◻ The INTA signal received by device 1 is passed on to the next device in the chain only if device 1 does not require service.
◻ If device 1 has a pending request, it blocks the INTA signal and proceeds to identify itself by putting its code on the data bus.
◻ In this daisy-chain arrangement, the device that is closer to the processor has the higher priority.
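A small software model (an assumption, not from the slides) of daisy-chain acknowledgement in C: INTA is passed from device 1 onward, and the first device with a pending request keeps it, so the device closest to the processor wins.

#include <stdio.h>
#include <stdbool.h>

#define NUM_DEVICES 4

/* Returns the index of the device that captures INTA, or -1 if none is requesting.
   Device 0 is electrically closest to the processor, so it has the highest priority. */
int propagate_inta(const bool requesting[NUM_DEVICES])
{
    for (int i = 0; i < NUM_DEVICES; i++) {
        if (requesting[i])
            return i;          /* this device blocks INTA and puts its code on the bus */
        /* otherwise the device forwards INTA to the next one in the chain */
    }
    return -1;
}

int main(void)
{
    bool req[NUM_DEVICES] = { false, true, true, false };  /* devices 1 and 2 request */
    printf("Device %d is acknowledged first\n", propagate_inta(req));  /* prints 1 */
    return 0;
}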



Handling Multiple devices – Interrupt Priority Schemes –
Daisy Chain
47

• Devices are connected to form a daisy chain.
• The devices share the interrupt-request line, and the interrupt-acknowledge line is connected to form a daisy chain.
• When devices raise an interrupt request, the interrupt-request line is activated.
• The processor responds by activating interrupt-acknowledge.
• The acknowledge signal is received by device 1; if device 1 does not need service, it passes the signal on to device 2.
• The device that is electrically closest to the processor has the highest priority.



Interrupt Priority Schemes – Arrangement
of priority Groups
48

• When I/O devices are organized into a priority structure, each device has its own interrupt-request and interrupt-acknowledge lines.
• When I/O devices are organized in a daisy-chain fashion, the devices share an interrupt-request line, and the interrupt-acknowledge signal propagates through the devices.
• A combination of the priority structure and the daisy-chain scheme can also be used.



Interrupt Priority Schemes – Arrangement
of priority Groups…
49

• Devices are organized into groups.
• Each group is assigned a different priority level.
• All the devices within a single group share an interrupt-request line and are connected to form a daisy chain.



Direct Memory Access
50

• A special control unit is used to transfer a block of data directly between an I/O device and the main memory, without continuous intervention by the processor.
• The control unit that performs these transfers is part of the I/O device's interface circuit. This control unit is called a DMA controller.
• The DMA controller performs functions that would normally be carried out by the processor:
- For each word, it provides the memory address and all the control signals.
- To transfer a block of data, it increments the memory address and keeps track of the number of transfers.



Direct Memory Access..
51

• The DMA controller can transfer a block of data between an external device and the memory without any intervention from the processor.
• However, the operation of the DMA controller must be under the control of a program executed by the processor. That is, the processor must initiate the DMA transfer.
• To initiate the DMA transfer, the processor informs the DMA controller of:
- the starting address,
- the number of words in the block, and
- the direction of transfer (I/O device to memory, or memory to I/O device).
• Once the DMA controller completes the DMA transfer, it informs the processor by raising an interrupt signal.
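A hedged sketch of how a processor might initiate a DMA transfer in C, writing the starting address, word count, and direction into hypothetical DMA-controller registers and then continuing until the completion interrupt arrives. All register names, addresses, and bit layouts are assumptions, not a real controller's interface.

#include <stdint.h>

/* Hypothetical DMA-controller register block (illustration only). */
typedef struct {
    volatile uint32_t start_address;   /* where in main memory the block begins       */
    volatile uint32_t word_count;      /* number of words to transfer                 */
    volatile uint32_t control;         /* bit 0: direction, bit 1: start/enable       */
    volatile uint32_t status;          /* bit 0: done (also signalled by an interrupt) */
} dma_controller_t;

#define DMA              ((dma_controller_t *)0x8000)   /* assumed base address */
#define DIR_MEM_TO_DEV   0x1
#define START            0x2

void start_dma_write(uint32_t addr, uint32_t nwords)
{
    DMA->start_address = addr;                    /* 1. starting address              */
    DMA->word_count    = nwords;                  /* 2. number of words in the block  */
    DMA->control       = DIR_MEM_TO_DEV | START;  /* 3. direction of transfer + start */
    /* The processor now continues with other work; the DMA controller raises an
       interrupt when the whole block has been transferred. */
}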
Direct Memory Access.. Use of DMA Controllers in a Computer
System

52

• A DMA controller connects a high-speed network to the computer bus.
• A disk/DMA controller that controls two disks provides two DMA channels.
• It can perform two independent DMA operations, as if each disk had its own DMA controller. The registers that store the memory address, word count, and status and control information are duplicated.
Direct Memory Access.. Method of DMA transfer
53
There are two primary methods of DMA (Direct Memory Access) transfer.

Cycle stealing:
Memory accesses by the processor and the DMA controller are interwoven.
Requests coming from the DMA controller for use of the memory are given higher priority than processor requests.
Among DMA devices, top priority may be given to high-speed devices such as a disk, a high-speed network interface, or a graphics display device.
Since the processor originates most of the memory access cycles, the DMA controller can be said to "steal" memory cycles from the processor.
Hence this interweaving technique is called cycle stealing.

Here's an easier explanation:

1. Cycle Stealing:
- The DMA controller gets temporary control of the memory to transfer small amounts of data.
- It "steals" a little time from the processor each time it needs to transfer data.
- The processor works most of the time, but the DMA controller pauses it for short moments to move data between memory and devices (like a disk or network).
- Example: when the processor is busy with something, the DMA controller might quickly transfer a small piece of data, then the processor goes back to work.

2. Block or Burst Mode:
- The DMA controller gets full control of the memory and transfers a large block of data all at once without any interruptions.
- The processor has to wait until the DMA controller finishes its task.
- Example: imagine transferring a big file from memory to a disk. The DMA controller transfers the whole file at once without letting the processor do anything until it is done.

In short:
- Cycle stealing is like taking turns between the DMA controller and the processor.
- Block mode is like giving the DMA controller full control for a while to transfer a lot of data.


Direct Memory Access.. Method of DMA transfer
54

The DMA controller may be given exclusive access to the main memory to transfer a block of data without interruption. This is known as block or burst mode.
Most DMA controllers incorporate a data storage buffer; a network interface is a typical example.
The DMA controller reads a block of data from the main memory and stores it in its input buffer.
This transfer takes place using burst mode, at a speed appropriate to the memory and the computer bus. The data in the buffer is then transmitted over the network at the network speed.



Direct Memory Access --- BUS Arbitration
55

There is a possibility of conflict between the processor and a DMA controller, or between two DMA controllers, if they try to use the bus at the same time to access the main memory.
To resolve these conflicts, an arbitration procedure is implemented on the bus to coordinate the activities of all devices requesting memory transfers.



Direct Memory Access --- BUS Arbitration …
56

Bus master:
The device that is allowed to initiate data transfers on the bus at any given time is called the bus master.
Bus arbitration:
The process by which the next device to become the bus master is selected and bus mastership is transferred to it.
Priority:
The selection of the bus master must take into account the needs of the various devices, by establishing a priority system for gaining access to the bus.





Direct Memory Access --- BUS Arbitration … Methods
58

Bus Arbitration methods:


Centralized - A single bus arbiter performs the
required arbitration.
Distributed - All devices participate in the selection of
bus master.



Direct Memory Access --- BUS Arbitration … Centralized
arbitration.

59
[Figure 4.20: Centralized bus arbitration — the processor and DMA controllers connected by the bus-busy (BBSY), bus-request (BR), and bus-grant (BG) lines, with the grant signal daisy-chained through the DMA controllers]

• A single bus arbiter performs the required arbitration (Figure 4.20).
• Normally, the processor is the bus master.
• The processor may grant bus mastership to one of the DMA controllers.
• A DMA controller indicates that it needs to become bus master by activating the BR line.
• The signal on the BR line is the logical OR of the bus requests from all devices connected to it.
• The processor then activates the BG1 signal, indicating to the DMA controllers that they may use the bus when it becomes free.
• The BG1 signal is connected to all DMA controllers in a daisy-chain arrangement.
Direct Memory Access --- BUS Arbitration … Centralized
arbitration.

60

If DMA controller-1 is requesting the bus:
Then DMA controller-1 blocks the propagation of the grant signal to the other devices.
Otherwise, DMA controller-1 passes the grant downstream by asserting BG2.
• The current bus master indicates to all devices that it is using the bus by activating the BBSY line.
• The bus arbiter is used to coordinate the activities of all devices requesting memory transfers.
• The arbiter ensures that only one request is granted at any given time, according to a priority scheme.
(BR: Bus Request, BG: Bus Grant, BBSY: Bus Busy)



Direct Memory Access --- BUS Arbitration … Centralized
arbitration.

61

The timing diagram (Figure 4.21) shows the sequence of events for the devices connected to the processor.
• DMA controller-2 requests and acquires bus mastership and later releases the bus.
• After DMA controller-2 releases the bus, the processor resumes bus mastership.



Direct Memory Access --- BUS Arbitration … DISTRIBUTED
ARBITRATION

62

All devices participate in the selection of the next bus master (Figure 4.22).
• Each device on the bus is assigned a 4-bit identification number (ID).
• When one or more devices request the bus, they
→ assert the Start-Arbitration signal, and
→ place their 4-bit ID numbers on the four open-collector lines ARB0 through ARB3.


Direct Memory Access --- BUS Arbitration … DISTRIBUTED
ARBITRATION

63

A winner is selected as a result of the interaction among the signals transmitted over these lines.
• The net outcome is that the code on the four lines represents the request that has the highest ID number.
• Advantage:
This approach offers higher reliability, since operation of the bus is not dependent on any single device.
For example:
Assume two devices A and B have IDs 5 (0101) and 6 (0110); the resulting code on the arbitration lines is 0111.
Each device compares the pattern on the arbitration lines with its own ID, starting from the MSB.
If a device detects a difference at some bit position, it disables its drivers at that bit position and all lower-order positions.
A driver is disabled by placing "0" at its input.
In the example, device A detects a difference on line ARB1; hence it disables its drivers on lines ARB1 and ARB0.
This causes the pattern on the arbitration lines to change to 0110, which means that B has won the contention.
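The bit-by-bit behaviour can be mimicked in C (a software model of the hardware interaction, added as an illustration): each device drives the ARB lines, which behave as a wired OR, and withdraws its drivers at and below the first position (from the MSB) where the line carries a 1 but its own ID bit is 0. Iterating until the pattern settles reproduces the example above.

#include <stdio.h>

#define ID_BITS 4

static unsigned arbitrate(const unsigned id[], int n)
{
    unsigned lines = 0, prev;
    do {
        prev = lines;
        lines = 0;
        for (int d = 0; d < n; d++) {
            unsigned drive = 0;
            for (int bit = ID_BITS - 1; bit >= 0; bit--) {
                unsigned line_bit = (prev >> bit) & 1u;
                unsigned id_bit   = (id[d] >> bit) & 1u;
                if (line_bit == 1u && id_bit == 0u)
                    break;              /* a higher ID exists: disable this and lower drivers */
                drive |= id_bit << bit; /* keep driving this bit                              */
            }
            lines |= drive;             /* open-collector lines behave as a wired OR          */
        }
    } while (lines != prev);            /* iterate until the pattern settles                  */
    return lines;                       /* settles at the highest competing ID                */
}

int main(void)
{
    unsigned ids[] = { 0x5, 0x6 };                          /* devices A (0101) and B (0110) */
    printf("winner pattern = 0x%X\n", arbitrate(ids, 2));   /* prints 0x6, i.e. B wins       */
    return 0;
}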
The Memory System – Speed, Size, and Cost

• A big challenge in the design of a computer system is to provide a sufficiently large memory, with a reasonable speed, at an affordable cost.
• Static RAM:
- Very fast, but expensive, because a basic SRAM cell has a complex circuit, making it impossible to pack a large number of cells onto a single chip.
• Dynamic RAM:
- Simpler basic cell circuit, hence much less expensive, but significantly slower than SRAMs.
• Magnetic disks:
- The storage provided by DRAMs is higher than that of SRAMs, but is still less than what is necessary.
- Secondary storage such as magnetic disks provides a large amount of storage, but is much slower than DRAMs.
[Figure: Memory hierarchy — processor registers, primary (L1) cache, secondary (L2) cache, main memory, and magnetic-disk secondary memory; moving down the hierarchy, size increases while speed and cost per bit decrease]

• Fastest access is to the data held in processor registers. Registers are at the top of the memory hierarchy.
• A relatively small amount of memory can be implemented on the processor chip. This is the processor cache.
• There are two levels of cache. Level 1 (L1) cache is on the processor chip. Level 2 (L2) cache is in between the main memory and the processor.
• The next level is the main memory, implemented as SIMMs. It is much larger, but much slower, than cache memory.
• The next level is magnetic disks, which provide a huge amount of inexpensive storage.
• Since the speed of memory access is critical, the idea is to bring the instructions and data that will be used in the near future as close to the processor as possible.
• The processor is much faster than the main memory.
• As a result, the processor has to spend much of its time waiting while instructions and data are being fetched from the main memory.
• This is a major obstacle to achieving good performance.
• The speed of the main memory cannot be increased beyond a certain point.
• Cache memory is an architectural arrangement that makes the main memory appear faster to the processor than it really is.
• Cache memory is based on a property of computer programs known as "locality of reference".

[Figure: Processor connected to the cache memory, which is connected to the main memory]

• When the processor issues a Read request, a block of words is transferred from the main memory to the cache, one word at a time.
• Subsequent references to the data in this block of words are found in the cache.
• At any given time, only some blocks of the main memory are held in the cache. Which blocks of the main memory are in the cache is determined by a "mapping function".
• When the cache is full and a new block of words needs to be transferred from the main memory, some block of words in the cache must be replaced. This is determined by a "replacement algorithm".
• The existence of the cache is transparent to the processor: the processor issues Read and Write requests in the same manner.
• If the data is in the cache, it is called a Read or Write hit.
• Read hit:
- Occurs when the requested data is found in the cache. The data is obtained from the cache.
• Write hit:
- Occurs when the data to be written is present in the cache.
- The cache has a replica of the contents of the main memory.
- The contents of the cache and the main memory may be updated simultaneously. This is the write-through protocol.
- Alternatively, only the contents of the cache are updated, and the block is marked as updated by setting a bit known as the dirty bit or modified bit. The contents of the main memory are updated when this block is replaced. This is the write-back or copy-back protocol.
• If the data is not present in the cache, then a Read miss or Write miss occurs.
• Read miss:
- Occurs when the requested data is not in the cache.
- The block of words containing the requested word is transferred from the memory.
- After the block is transferred, the desired word is forwarded to the processor.
- The desired word may also be forwarded to the processor as soon as it is transferred, without waiting for the entire block to be transferred. This is called load-through or early restart.
• Write miss:
- Occurs when the data to be written is not in the cache.
- If the write-through protocol is used, the contents of the main memory are updated directly.
- If the write-back protocol is used, the block containing the addressed word is first brought into the cache, and then the desired word in the cache is overwritten with the new information.
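A compact sketch contrasting the two write policies in C; this toy model uses a single cache line with one-word "blocks", so cache_line_t, write_through(), and write_back() are illustrative placeholders rather than a real cache implementation (addresses are assumed to stay below 1024).

#include <stdbool.h>

typedef struct {
    bool valid;
    bool dirty;        /* used by write-back only */
    unsigned tag;
    int  data;
} cache_line_t;

static int          main_memory[1024];
static cache_line_t line;                 /* a single cache line, for illustration */

/* Write-through: update the cache (on a hit) and the main memory together. */
void write_through(unsigned addr, int value)
{
    if (line.valid && line.tag == addr)
        line.data = value;
    main_memory[addr] = value;            /* memory is always kept up to date */
}

/* Write-back: update only the cache and mark the line dirty;
   memory is updated later, when the line is replaced. */
void write_back(unsigned addr, int value)
{
    if (!(line.valid && line.tag == addr)) {      /* write miss: bring the block in first */
        if (line.valid && line.dirty)
            main_memory[line.tag] = line.data;    /* flush the old dirty line             */
        line.valid = true;
        line.tag   = addr;
        line.data  = main_memory[addr];
    }
    line.data  = value;
    line.dirty = true;                            /* memory is now stale                   */
}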
• Mapping functions determine how memory blocks are placed in the cache.
• A simple example:
- A cache consisting of 128 blocks of 16 words each.
- Total size of the cache is 2048 (2K) words.
- Main memory is addressable by a 16-bit address.
- Main memory has 64K words.
- Main memory has 4K blocks of 16 words each.
• Three mapping functions:
- Direct mapping
- Associative mapping
- Set-associative mapping
Direct Mapping

[Figure: Direct-mapped cache — main memory blocks 0 to 4095 map onto the 128 cache blocks, each with a tag; the main memory address is divided into Tag (5 bits), Block (7 bits), and Word (4 bits) fields]

• Block j of the main memory maps to block (j modulo 128) of the cache. Block 0 maps to cache block 0, block 129 maps to cache block 1.
• More than one memory block is mapped onto the same position in the cache; e.g., block 0 and block 128 both map to cache block 0.
• This may lead to contention for cache blocks even if the cache is not full.
• The conflict is resolved by allowing the new block to replace the old block, leading to a trivial replacement algorithm.
• The memory address is divided into three fields:
- The low-order 4 bits determine one of the 16 words in a block.
- When a new block is brought into the cache, the next 7 bits determine which cache block this new block is placed in.
- The high-order 5 bits determine which of the 32 possible memory blocks is currently present in the cache block. These are the tag bits.
• Simple to implement, but not very flexible.

Word offset (4 bits): identifies the specific word within a block (16 words per block).
Cache block index (7 bits): identifies the specific cache block (128 blocks in the cache).
Tag (5 bits): differentiates which memory block is stored in the cache block.
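For the example cache above (16-word blocks, 128 cache blocks, 16-bit addresses), the three address fields can be extracted with shifts and masks; the C sketch below only illustrates the arithmetic, with an arbitrary address chosen for demonstration.

#include <stdio.h>
#include <stdint.h>

/* Direct mapping for the example: Word = 4 bits, Block = 7 bits, Tag = 5 bits. */
int main(void)
{
    uint16_t address = 0xA7C3;                 /* any 16-bit main-memory address     */

    unsigned word  =  address        & 0xF;    /* low 4 bits: word within a block    */
    unsigned block = (address >> 4)  & 0x7F;   /* next 7 bits: cache block index     */
    unsigned tag   = (address >> 11) & 0x1F;   /* high 5 bits: tag                   */

    unsigned mem_block = address >> 4;         /* main-memory block number (0..4095) */
    /* Direct mapping rule: memory block j goes to cache block (j mod 128). */
    printf("word=%u block=%u tag=%u (memory block %u -> cache block %u)\n",
           word, block, tag, mem_block, mem_block % 128);
    return 0;
}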
Associative Mapping

[Figure: Associative-mapped cache — any main memory block (0 to 4095) can be placed in any of the 128 cache blocks; the main memory address is divided into Tag (12 bits) and Word (4 bits) fields]

• Any main memory block can be placed in any cache position (unlike direct mapping).
• The memory address is divided into two fields:
- The low-order 4 bits identify the word within a block.
- The high-order 12 bits are the tag bits that identify a memory block when it is resident in the cache.
• Flexible, and uses the cache space efficiently.
• Replacement algorithms can be used to replace an existing block in the cache when the cache is full.
• The cost is higher than for a direct-mapped cache because of the need to search all 128 tag patterns to determine whether a given block is in the cache.

Word offset (4 bits): identifies the specific word within a block (since each block has 16 words).
Tag (12 bits): identifies the memory block when it is in the cache.
Set-Associative Mapping

[Figure: Set-associative-mapped cache with two blocks per set — the 128 cache blocks are grouped into 64 sets, each block with its own tag; the main memory address is divided into Tag (6 bits), Set (6 bits), and Word (4 bits) fields]

• Blocks of the cache are grouped into sets.
• The mapping function allows a block of the main memory to reside in any block of a specific set.
• Divide the cache into 64 sets, with two blocks per set.
• Memory blocks 0, 64, 128, etc. map to set 0, and they can occupy either of the two positions within that set.
• The memory address is divided into three fields:
- The low-order 4 bits select the word within a block.
- A 6-bit field determines the set number.
- The high-order 6 bits are compared with the tag fields of the two blocks in the selected set.
• Set-associative mapping is a combination of direct and associative mapping.
• The number of blocks per set is a design parameter:
- One extreme is to have all the blocks in one set, requiring no set bits (fully associative mapping).
- The other extreme is to have one block per set, which is the same as direct mapping.

Set index (6 bits): identifies which set in the cache the memory block belongs to.
Word offset (4 bits): identifies which word within the block.
Tag (6 bits): identifies the specific memory block within the set.
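Under the same example geometry, the 2-way set-associative split (Tag = 6, Set = 6, Word = 4 bits) can be computed the same way; the C sketch below compares the tag against both blocks of the selected set. The structure and function names are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS        64
#define BLOCKS_PER_SET  2

typedef struct { bool valid; unsigned tag; } cache_block_t;

static cache_block_t cache[NUM_SETS][BLOCKS_PER_SET];

/* 2-way set-associative lookup: Word = 4 bits, Set = 6 bits, Tag = 6 bits. */
bool cache_hit(uint16_t address)
{
    unsigned set = (address >> 4)  & 0x3F;   /* which set the memory block maps to */
    unsigned tag = (address >> 10) & 0x3F;   /* compared with both blocks' tags    */

    for (int i = 0; i < BLOCKS_PER_SET; i++)
        if (cache[set][i].valid && cache[set][i].tag == tag)
            return true;                     /* hit in either block of the set     */
    return false;                            /* miss: the block must be brought in */
}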
Review Questions
Describe the different mechanisms used for interfacing input and output devices.
Define interrupts. Explain enabling and disabling of interrupts.
Differentiate between the polling scheme and vectored interrupts.
With a diagram, explain the use of DMA controllers in a computer system.
Discuss Direct Memory Access (DMA) with a neat diagram.
List the different memory mapping functions. Explain any one mapping function in detail.
What is bus arbitration? Mention the types of bus arbitration and explain any one of them.
Why cache memory? Discuss the same with respect to locality of reference, cache hit, and cache miss.
