
Unit 3

Chapter 8
INPUT OUTPUT
Input-Output Organization

Input-output (I/O) organization is a critical aspect of computer architecture,


involving the mechanisms through which a computer system communicates with
external devices such as keyboards, monitors, printers, storage devices, and
network interfaces. Effective I/O organization ensures efficient data transfer
between the CPU, memory, and peripheral devices, which is crucial for overall
system performance.
Peripheral devices

A peripheral device, also known as a peripheral, computer peripheral, or input-output (I/O) device, is any of various devices (including sensors) used to enter information and instructions into a computer for storage or processing, and to deliver the processed data to a human operator or, in some cases, to a machine controlled by the computer. Such devices make up the peripheral equipment of modern digital computer systems.


Storage Devices:
Devices that store data persistently (e.g., hard drives, SSDs, USB flash drives).

Communication Devices:
Devices that enable data exchange between computers (e.g., network cards, modems).

Input Devices:
Devices that send data to the computer (e.g., keyboard, mouse, scanner).

Output Devices:
Devices that receive data from the computer (e.g., monitor, printer, speakers).
Input-Output Interface

An input-output interface provides the method by which information is transferred between internal storage (memory) and external peripheral devices.
A peripheral device provides input to or receives output from the computer; such devices are also called input-output devices.
For example, a keyboard and mouse provide input to the computer and are called input devices, while a monitor and printer receive output from the computer and are called output devices.
The major differences are as follows:

Peripheral devices are electromagnetic and electromechanical in nature, whereas the CPU is electronic, so their modes of operation differ considerably.
A synchronization mechanism is needed because the data transfer rate of peripheral devices is much slower than that of the CPU.
The data codes and formats used by peripheral devices differ from those used in the CPU and memory.
The operating modes of peripheral devices differ from one another, and each must be controlled so as not to disturb the operation of the other peripheral devices connected to the CPU.
I/O BUS AND INTERFACE MODULES
The data bus, address bus, and control bus that come out of the processor and are used to communicate with I/O devices are collectively called the I/O bus.
The communication link between the processor and several peripherals is shown in the given
figure.
The I/O bus is connected to all peripheral interfaces.
To communicate with a particular device, the processor places a device address on the
address bus.
Each interface attached to the I/O bus contains an address decoder that monitors the address
lines.
When the interface detects an address to be its own, it activates the path between the bus
and the device that it controls.
Types of I/O Commands

Control Commands: A control command is a function code that activates the corresponding peripheral and tells it what to do.
Status Commands: A status command is used to test various status conditions in the interface and the peripheral device, such as BUSY, ERROR, or whether data is available in the buffer.

Data Output Command: A data output command causes the interface to respond by transferring data from the processor to the peripheral. After this command is issued, the data is sent from the CPU to the buffer of the interface.

Data Input Command: This command is sent by the CPU when data is to be read from the peripheral. After this command is issued, the data from the peripheral is placed into the buffer of the interface and read by the CPU.
I/O versus Memory Bus
Input Output Processor/Data Channel

An input-output processor (IOP) is a processor with direct memory access capability. In this organization, the computer system is divided into a memory unit and a number of processors.

Each IOP controls and manages the input-output tasks. The IOP is similar to a CPU except that it handles only the details of I/O processing. The IOP can fetch and execute its own instructions; these IOP instructions are designed to manage I/O transfers only.
Input Output Processor Diagram
Isolated I/O

In isolated I/O, memory and I/O share a common bus (data and address) but have separate read and write control lines for I/O.
When the CPU decodes an instruction intended for an I/O device, it places the address on the address lines and asserts the I/O read or write control line, causing the data transfer to occur between the CPU and the I/O device.
Because the address spaces of memory and I/O are kept separate, the scheme is called isolated I/O. The I/O addresses are called ports, and there are separate read and write instructions for I/O and for memory.
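As a rough illustration, isolated I/O appears to software as separate port instructions rather than memory accesses. The sketch below assumes a Linux/x86 user-space program with sufficient privilege for ioperm(); the port address 0x300 is purely hypothetical.

/* Minimal sketch of isolated (port-mapped) I/O on Linux/x86. */
#include <stdio.h>
#include <sys/io.h>

#define DEV_PORT 0x300                        /* hypothetical I/O port ("port") */

int main(void)
{
    if (ioperm(DEV_PORT, 1, 1) != 0) {        /* request access to the port     */
        perror("ioperm");
        return 1;
    }
    outb(0x5A, DEV_PORT);                     /* I/O write, separate from memory write */
    unsigned char status = inb(DEV_PORT);     /* I/O read, separate from memory read   */
    printf("status = 0x%02X\n", status);
    return 0;
}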
Memory-mapped I/O (MMIO)
It is a method used in computer architecture to manage communication between the CPU and
peripheral devices. In MMIO, peripheral devices are assigned specific addresses within the same
address space used for the system's RAM. This allows the CPU to control and communicate with
peripheral devices using standard memory instructions.

Key Concepts of Memory-Mapped I/O


Shared Address Space:
Both memory and I/O devices share the same address space. Each I/O device is assigned a unique
address or range of addresses.

Address Decoding:
The address decoding logic differentiates between memory addresses and I/O device addresses. When an address within the I/O range is accessed, the corresponding device is selected.
Direct Access:
The CPU accesses I/O devices using the same instructions it uses to access memory (e.g.,
LOAD, STORE). This simplifies programming and reduces the need for specialized I/O
instructions.

Control and Status Registers:


I/O devices typically have control and status registers mapped to specific addresses. These
registers are used to configure the device, start operations, and check the status.
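A minimal sketch of how memory-mapped I/O looks in C is shown below. The base address, register offsets, and the STATUS_READY bit are assumptions for illustration; an embedded target would take the real values from the device's datasheet.

/* Minimal sketch of memory-mapped I/O: device registers accessed with
 * ordinary LOAD/STORE instructions through volatile pointers. */
#include <stdint.h>

#define DEV_BASE   0x40000000u                                /* hypothetical base address */
#define REG_CTRL   (*(volatile uint32_t *)(DEV_BASE + 0x0))   /* control register          */
#define REG_STATUS (*(volatile uint32_t *)(DEV_BASE + 0x4))   /* status register           */
#define REG_DATA   (*(volatile uint32_t *)(DEV_BASE + 0x8))   /* data register             */

#define STATUS_READY 0x1u                                     /* assumed "ready" bit       */

void mmio_write_byte(uint8_t b)
{
    while ((REG_STATUS & STATUS_READY) == 0)   /* ordinary loads poll the status register */
        ;
    REG_DATA = b;                              /* ordinary store starts the output        */
}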
Example of I/O Interface
Explanation
The I/O read and write are two control lines that specify an input or output respectively.
The four registers communicate directly with the I/O device attached to the interface.
The interface may operate with an output device, with an input device, or with a device that requires both input and output.
A command is passed to the I/O device by sending a word to the appropriate interface register.
In a system like this, the function code in the I/O bus is not needed because

* Control is sent to the control register


* Status information is received from the status register and
* Data are transferred to and from the port A and port B registers.
The transfer of data, control, and status information is always via the common data bus.
The distinction between data, control, and status information is determined by the particular interface register with which the CPU communicates.
The control register receives control information from the CPU.
The interface registers communicate with CPU through the bidirectional data bus.
The address bus selects the interface unit through the chip select and the two
register select inputs.
The content of the selected register is transferred into the CPU via the data bus
when the I/O read signal is enabled.
The CPU transfers binary information into the selected register via the data bus
when the I/O write input is enabled.
Modes of Transfer
In computer systems, data transfer between the CPU and peripheral devices can occur using
various modes of transfer. These modes determine how data is moved, how control signals are
managed, and how efficiently the system operates.

The primary modes of data transfer are:

1. Programmed I/O
2. Interrupt-Driven I/O
3. Direct Memory Access (DMA)
1. Programmed I/O
Programmed I/O is the simplest mode of data transfer, where the CPU is directly involved in the
transfer process.
The CPU executes instructions to read from or write to the I/O device, and it must check the status of
the device before each operation.

Characteristics:
The CPU controls all aspects of data transfer.
The CPU continuously polls the I/O device to check if it is ready for data transfer.
This method can be inefficient as the CPU spends a significant amount of time waiting for the I/O
device.
2. Interrupt-Driven I/O
Interrupt-Driven I/O improves efficiency by allowing the CPU to execute other tasks while
waiting for the I/O device to be ready.
When the I/O device is ready, it sends an interrupt signal to the CPU, prompting it to handle
the I/O operation.

Characteristics:
The CPU is not constantly checking the status of the I/O device.
The I/O device sends an interrupt signal to the CPU when it is ready.
The CPU handles the interrupt by executing an Interrupt Service Routine (ISR).
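A minimal, microcontroller-style sketch of this idea follows; the ISR name and the two UART helper functions are hypothetical placeholders, since every device family defines its own vectors and registers.

/* Minimal sketch of interrupt-driven input: the CPU does other work and this
 * routine runs only when the device signals that a byte is ready. */
#include <stdint.h>

#define BUF_SIZE 64u

extern uint8_t uart_read_data(void);    /* hypothetical: read the received byte    */
extern void    uart_clear_irq(void);    /* hypothetical: acknowledge the interrupt */

static volatile uint8_t  rx_buf[BUF_SIZE];
static volatile uint32_t rx_head;

void UART_RX_ISR(void)                  /* hypothetical interrupt service routine  */
{
    rx_buf[rx_head % BUF_SIZE] = uart_read_data();  /* store the ready byte        */
    rx_head++;                                      /* main loop drains the buffer */
    uart_clear_irq();
}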
3. Direct Memory Access (DMA)
Direct Memory Access (DMA) is a more advanced mode where a dedicated DMA controller
handles the data transfer between memory and the I/O device, bypassing the CPU. This allows for
high-speed data transfer without burdening the CPU.

Characteristics:
A DMA controller manages the data transfer process.
The CPU initiates the transfer by configuring the DMA controller.
The DMA controller performs the transfer independently of the CPU.
The CPU can continue executing other tasks while the DMA controller handles the transfer.
Example of Programmed I/O

Programmed I/O operations are the result of I/O instructions written in the computer program.

Each data item transfer is initiated by an instruction in the program. Usually, the transfer is to and
from a CPU register and peripheral.

Other instructions are needed to transfer the data to and from CPU and memory.

Transferring data under program control requires constant monitoring of the peripheral by the CPU.

Once a data transfer is initiated, the CPU is required to monitor the interface to see when a transfer
can again be made.
Examples:
Reading data from a keyboard or a mouse where the CPU continuously polls the device for
keypresses or mouse movements.
Writing data to a printer where the CPU initiates the print operation, checks the printer status,
and transfers data in small chunks.
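The following sketch shows this polling pattern in C, assuming two hypothetical device registers; it makes the busy-waiting that characterizes programmed I/O explicit.

/* Minimal sketch of programmed I/O: the CPU polls a status register and moves
 * each data item itself. The two registers are hypothetical and would, in
 * practice, map to fixed device addresses. */
#include <stddef.h>
#include <stdint.h>

extern volatile uint32_t DEV_STATUS;    /* hypothetical status register */
extern volatile uint32_t DEV_DATA;      /* hypothetical data register   */
#define STATUS_READY 0x1u               /* assumed "ready" bit          */

void pio_write(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        while ((DEV_STATUS & STATUS_READY) == 0)
            ;                           /* busy-wait: CPU does nothing useful here */
        DEV_DATA = buf[i];              /* transfer one item per instruction       */
    }
}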
Drawbacks:
Inefficiency: Programmed I/O can be inefficient, especially for high-speed devices or large data
transfers, as it keeps the CPU busy and may lead to a waste of processing time.
Limited Concurrency: The CPU is dedicated to managing the I/O operation, limiting its ability to
perform other tasks concurrently.
Software considerations for I/O routines

Software considerations for I/O routines are crucial in designing efficient, reliable, and
maintainable input/output operations in a computer system.
These considerations include how to manage data transfer, handle errors, optimize performance,
and ensure compatibility with various hardware devices.
Here are key aspects to consider:

Device Independence
Abstract Interfaces: Design I/O routines to work with a variety of devices through abstract interfaces.
This allows the same code to handle different types of devices without modification

Buffering
Input Buffering: Use input buffers to temporarily hold data coming from input devices. This allows
the CPU to process data at its own pace, reducing wait times.
Error Handling
Error Detection: Implement robust error detection mechanisms to identify issues such as device
failures, data corruption, or communication errors

Performance Optimization
DMA (Direct Memory Access): Utilize DMA for high-speed data transfers to reduce CPU load and
increase throughput.

Memory Management
Memory Allocation: Efficiently manage memory allocation for I/O buffers to optimize performance and
minimize fragmentation.
CPU bus signals for DMA transfer

Direct Memory Access (DMA):


DMA stands for Direct Memory Access. It is a mechanism that enables data transfer between memory and an I/O (input/output) device without the involvement of the CPU during the transfer.
Modes of DMA Transfer:

Burst Mode –

In this mode, a burst of data (the entire block, or a burst containing part of it) is transferred before the CPU takes control of the buses back from the DMAC.
This is the quickest mode of DMA transfer, since a large amount of data is moved in one go, saving considerable time.
Cycle Stealing Mode –

A slow I/O device takes some time to prepare a data word, and during that time the CPU keeps control of the buses.
Once the word is ready, the CPU gives control of the system buses to the DMAC for one cycle, during which the prepared word is transferred to memory.
Compared with burst mode, this mode is somewhat slower because of the extra time consumed by the I/O device while preparing each word.
DMA(Direct Memory Access) Controller

A DMA controller is a hardware device that allows I/O devices to access memory directly, with minimal participation of the processor. The DMA controller uses the usual interface circuits to communicate with the CPU and the input/output devices.

Working of DMA Controller


The DMA controller has three registers, as follows.

Address register – It contains the address to specify the desired location in memory.
Word count register – It contains the number of words to be transferred.
Control register – It specifies the transfer mode.
DMA Controller Diagram in Computer Architecture
Explanation:
The CPU initializes the DMA controller by sending the following information through the data bus:
The starting address of the memory block where the data is available (for a read) or where the data is to be stored (for a write).
The word count, which is the number of words in the memory block to be read or written.
Control bits that define the mode of transfer, such as read or write.
A control bit to begin the DMA transfer.
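A sketch of this initialization sequence in C is shown below. The register addresses and control bits are assumptions for illustration; a real DMA controller documents its own layout.

/* Minimal sketch of the CPU programming a DMA controller's three registers.
 * Addresses and bit values are hypothetical. */
#include <stdint.h>

#define DMAC_BASE   0x40010000u
#define DMAC_ADDR   (*(volatile uint32_t *)(DMAC_BASE + 0x0))  /* address register    */
#define DMAC_COUNT  (*(volatile uint32_t *)(DMAC_BASE + 0x4))  /* word count register */
#define DMAC_CTRL   (*(volatile uint32_t *)(DMAC_BASE + 0x8))  /* control register    */

#define CTRL_WRITE  0x2u    /* assumed: memory-to-device direction */
#define CTRL_START  0x1u    /* assumed: begin the transfer         */

void dma_start_write(uint32_t mem_addr, uint32_t word_count)
{
    DMAC_ADDR  = mem_addr;                 /* starting address of the memory block */
    DMAC_COUNT = word_count;               /* number of words to transfer          */
    DMAC_CTRL  = CTRL_WRITE | CTRL_START;  /* set mode and start; CPU is now free  */
}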
Advantages of DMA Controller
Direct Memory Access speeds up memory operations and data transfer.
CPU is not involved while transferring data.
DMA requires very few clock cycles while transferring data.
DMA distributes workload very appropriately.
DMA helps the CPU in decreasing its load.

Disadvantages of DMA Controller


Direct Memory Access is a costly operation because of additional operations.
DMA suffers from Cache-Coherence Problems.
DMA Controller increases the overall cost of the system.
DMA Controller increases the complexity of the software.
DMA Transfer:
Explanation

The CPU communicates with the DMA controller through the address and data buses, as with any interface unit.
The DMA controller has its own address, which activates the DS and RS lines.
The CPU initializes the DMA controller through the data bus.
Once the DMA controller receives the start control command, it can begin the transfer between the peripheral device and memory.
When the peripheral device sends a DMA request, the DMA controller activates the BR line, informing the CPU to give up the buses.
The CPU responds with its BG line, informing the DMA controller that the buses are disabled.
When the peripheral device receives a DMA acknowledge, it
* puts a word on the data bus for a write, or
* receives a word from the data bus for a read.
Thus the DMA controller controls the read or write operations and supplies the address for memory.
The peripheral unit can then communicate with memory through the data bus for direct transfer between the two units while the CPU is momentarily disabled.
DMA Applications

Data Acquisition Systems

Real-time signal processing applications, such as radar and sonar, use DMA to transfer data from
analog-to-digital converters (ADCs) to memory.

Multimedia and Graphics


Video Streaming: In video capture and playback systems, DMA transfers video frames from the
camera or storage to the display buffer, ensuring smooth playback without dropping frames.

Embedded Systems
Microcontrollers: In embedded systems, microcontrollers with DMA capabilities can transfer data
between peripherals (e.g., UART, SPI, I2C) and memory, freeing the CPU for other tasks.

Computer Systems
Peripheral Devices: Devices like printers, scanners, and external drives use DMA to transfer data
to and from the computer, improving data transfer rates and overall system performance.
Instruction Level Parallelism(ILP)

Instruction Level Parallelism (ILP) refers to the parallel execution of instructions from a
single program within a single processor. The primary goal of ILP is to improve performance
by executing multiple instructions simultaneously, utilizing different functional units within
the processor. ILP can be exploited through various techniques in both hardware and
software.

Techniques to Exploit ILP

Static techniques, which are software (compiler) dependent

Dynamic techniques, which are hardware dependent


Data hazards
Data hazards occur in a pipeline when an instruction depends on the result of a previous instruction
in a way that affects its correct execution. These hazards can cause incorrect computations or
require the pipeline to stall until the hazard is resolved, which can degrade performance.

Types of Data Hazards


1. Read-After-Write (RAW) Hazard:
Also known as a true dependency or flow dependency.
Occurs when an instruction needs to read a register that a previous instruction is writing to.
2. Write-After-Read (WAR) Hazard:
Also known as an anti-dependency.
Occurs when an instruction needs to write to a register that a previous instruction is reading from.
3. Write-After-Write (WAW) Hazard:
Also known as an output dependency.
Occurs when two instructions write to the same register.
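The short C fragment below is illustrative only: when the compiled statements share a register in a pipeline, the dependences between them correspond to the three hazard types. The variable names are arbitrary.

/* Dependences among simple statements, annotated with the hazard each
 * pair would create in a pipeline. */
void hazards(int a, int b)
{
    int r1, r2;

    r1 = a + b;     /* I1: writes r1                                    */
    r2 = r1 * 2;    /* I2: reads r1  -> RAW (true dependency) on I1     */
    r1 = b - 7;     /* I3: writes r1 -> WAR (anti-dependency) with I2,
                       and WAW (output dependency) with I1              */
    (void)r1;
    (void)r2;
}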
Overcoming Data Hazards (Scoreboarding, Tomasulo's Algorithm, Branch Prediction)

Scoreboarding
The main objective of scoreboarding is to maximize the utilization of the pipeline by allowing
instructions to execute out of order when their operands and functional units are available.

Components of Scoreboarding

Instruction Status Table:


Tracks the state of each instruction in the pipeline (e.g., issued, executed, written back).

Functional Unit Status Table:


Keeps track of the availability of each functional unit and which instruction is currently using it.
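As a rough data-structure sketch, the two tables can be pictured as the C structures below. The field names follow the classic description; the enum states and table sizes are assumptions for illustration.

/* Sketch of scoreboard bookkeeping: one entry per instruction and one entry
 * per functional unit. */
#include <stdbool.h>

enum instr_state { ISSUED, READ_OPERANDS, EXECUTED, WRITTEN_BACK };

struct instr_status {            /* Instruction Status Table entry       */
    int              instr_id;   /* which instruction                    */
    enum instr_state state;      /* how far it has progressed            */
};

struct fu_status {               /* Functional Unit Status Table entry   */
    bool busy;                   /* is the unit in use?                  */
    int  instr_id;               /* instruction currently using it       */
    int  dest_reg;               /* destination register                 */
    int  src_reg[2];             /* source registers                     */
    bool src_ready[2];           /* are the source operands available?   */
};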
Tomasulo's algorithm

Tomasulo's algorithm is a hardware-based dynamic scheduling technique that addresses


the challenges of instruction dependencies and resource conflicts in a pipelined
processor. Developed by Robert Tomasulo in 1967 for the IBM System/360 Model 91, this
algorithm dynamically schedules instructions to execute out of order when operands and
functional units are available, thereby maximizing parallelism and improving performance.
Branch prediction

Branch prediction is a technique used in computer architecture to improve the flow in the
instruction pipeline by guessing the outcome of conditional branch instructions before they are
actually evaluated. By predicting whether a branch will be taken or not, the processor can
prefetch and execute subsequent instructions without waiting for the actual decision, thus
reducing stalls and improving performance.
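One common hardware scheme is a table of 2-bit saturating counters indexed by the branch address. The sketch below is a simplified software model of that idea; the table size and indexing are assumptions, not a description of any particular processor.

/* Minimal sketch of a 2-bit saturating-counter branch predictor. */
#include <stdbool.h>
#include <stdint.h>

#define TABLE_SIZE 1024
static uint8_t counters[TABLE_SIZE];   /* 0,1 = predict not taken; 2,3 = predict taken */

bool predict(uint32_t pc)
{
    return counters[pc % TABLE_SIZE] >= 2;       /* guess before the branch resolves */
}

void update(uint32_t pc, bool taken)
{
    uint8_t *c = &counters[pc % TABLE_SIZE];
    if (taken  && *c < 3) (*c)++;                /* strengthen toward "taken"     */
    if (!taken && *c > 0) (*c)--;                /* strengthen toward "not taken" */
}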
Limitations of ILP

Data Hazards
Data hazards limit parallelism when instructions depend on the results of earlier instructions, as described above.

Control Hazards
Control hazards occur due to branch instructions that change the flow of execution.
Structural Hazards
Structural hazards occur when hardware resources are insufficient to support the desired level
of parallelism

Limitations of Software
Compilers play a crucial role in exposing ILP by reordering instructions and optimizing code.
Thank you
