Chapter 3: Input/Output Devices and Disk Scheduling
Agenda
• I/O Devices
• Organization of the I/O Function
• Operating System Design Issues
• I/O Buffering
• Disk Scheduling
• RAID
• Disk Cache
• Linux I/O
I/O Devices
Categories of I/O Devices
External devices that engage in I/O with
computer systems can be roughly grouped into
three categories:
• Human readable
– Used to communicate with the user
– Printers
– Video display terminals
• Display
• Keyboard
• Mouse
• Machine readable
– Used to communicate with electronic
equipment
– Disk and tape drives
– Sensors
– Controllers
– Actuators
• Communication
– Used to communicate with remote devices
– Digital line drivers
– Modems
Differences in I/O Devices
• Data rate
– Transfer rates may differ by several orders of magnitude from one device to another
Differences in I/O Devices
• Application
– Disk used to store files requires file
management software
– Disk used to store virtual memory pages
needs special hardware and software to
support it
– Terminal used by system administrator
may have a higher priority
Differences in I/O Devices
• Complexity of control
• Unit of transfer
– Data may be transferred as a stream of
bytes for a terminal or in larger blocks for
a disk
• Data representation
– Encoding schemes
• Error conditions
– Devices respond to errors differently
Organization of the I/O Function
Performing I/O
• Programmed I/O
– Process is busy-waiting for the operation
to complete
• Interrupt-driven I/O
– I/O command is issued
– Processor continues executing
instructions
– I/O module sends an interrupt when done
Performing I/O
• Direct Memory Access (DMA)
– DMA module controls exchange of data
between main memory and the I/O device
– Processor interrupted only after entire
block has been transferred
Relationship Among Techniques
Evolution of the I/O Function
• Processor directly controls a peripheral
device
• Controller or I/O module is added
– Processor uses programmed I/O without
interrupts
– Processor does not need to handle details
of external devices
Evolution of the I/O Function
• Controller or I/O module with interrupts
– Processor does not spend time waiting for
an I/O operation to be performed
• Direct Memory Access
– Blocks of data are moved into memory
without involving the processor
– Processor involved at beginning and end
only
Evolution of the I/O Function
• I/O module is a separate processor
• I/O processor
– I/O module has its own local memory
– It's a computer in its own right
Direct Memory Access
• Processor delegates I/O operation to
the DMA module
• DMA module transfers data directly to
or from memory
• When complete DMA module sends an
interrupt signal to the processor
DMA
• The DMA technique works as follows. When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending: whether a read or a write is requested; the address of the I/O device involved, communicated on the data lines; the starting location in memory to read from or write to; and the number of words to be read or written.
• The processor then continues with other work. It has delegated this I/O operation to the DMA module.
• The DMA module transfers the entire block of data, one word at a time, directly to or from memory, without going through the processor.
• When the transfer is complete, the DMA module sends an interrupt signal to the processor.
• Thus, the processor is involved only at the beginning and end of the transfer.
DMA Configurations
The DMA mechanism can be configured in a variety of ways.
Operating System Design Issues
Operating System Design Issues
• Efficiency
– Most I/O devices are extremely slow compared to main memory
– Multiprogramming allows some processes to wait on I/O while another process executes
– Even so, I/O may still not keep up with processor activity
– Swapping brings in additional ready processes, but swapping is itself an I/O operation
Operating System Design Issues
• Generality
– Desirable to handle all I/O devices in a
uniform manner
– Hide most of the details of device I/O in
lower-level routines so that processes and
upper levels see devices in general terms
such as read, write, open, close, lock,
unlock
Logical Structure of the I/O Function
The details of the organization will depend
on the type of device and the application.
The three most important logical structures
are presented in the previous figure.
Let us consider the simplest case first, that of a local
peripheral device that communicates in a simple
fashion, such as a stream of bytes or records (Figure 11.4a).
The following layers are involved:
• Logical I/O:
The logical I/O module deals with the device as a
logical resource and is not concerned with the
details of actually controlling the device.
The logical I/O module is concerned with managing
general I/O functions on behalf of user processes,
allowing them to deal with the device in terms of a
device identifier and simple commands such as open,
close, read, and write.
• Device I/O:
The requested operations and data (buffered
characters, records, etc.) are converted into
appropriate sequences of I/O instructions, channel
commands, and controller orders. Buffering
techniques may be used to improve utilization.
• Physical organization: Just as virtual memory
addresses must be converted into physical main
memory addresses, taking into account the
segmentation and paging structure, logical references
to files and records must be converted to physical
secondary storage addresses, taking into account
the physical track and sector structure of the
secondary storage device.
Buffering
I/O Buffering
• Reasons for buffering
– Processes must wait for I/O to complete
before proceeding
– Certain pages must remain in main
memory during I/O
I/O Buffering
• Block-oriented
– Information is stored in fixed sized blocks
– Transfers are made a block at a time
– Used for disks and tapes
• Stream-oriented
– Transfer information as a stream of bytes
– Used for terminals, printers,
communication ports, mouse and other
pointing devices, and most other devices
that are not secondary storage
Single Buffer
• Operating system assigns a buffer in
main memory for an I/O request
• Block-oriented
– Input transfers made to buffer
– Block moved to user space when needed
– Another block is moved into the buffer
• Read ahead: the next block is read in anticipation that it will eventually be needed
Single Buffer
• Stream-oriented
– Used one line at a time
– User input from a terminal is one line at a
time with carriage return signaling the end
of the line
– Output to the terminal is one line at a time
I/O Buffering
Double Buffer
• Use two system buffers instead of one
• A process can transfer data to or from one
buffer while the operating system empties or
fills the other buffer
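The alternation can be sketched as follows (sequentially, for clarity; in a real system the fill of one buffer and the consumption of the other overlap in time, and all names are illustrative):

```python
# Double-buffering sketch: the OS fills one buffer from the device
# while the process consumes the other; the two swap roles each cycle.
def double_buffered_read(device_blocks, consume):
    buffers = [None, None]
    filling = 0
    buffers[filling] = device_blocks[0]          # prime the first buffer
    for i in range(1, len(device_blocks) + 1):
        ready, filling = filling, 1 - filling    # swap roles
        if i < len(device_blocks):
            buffers[filling] = device_blocks[i]  # OS fills one buffer...
        consume(buffers[ready])                  # ...while process uses other

out = []
double_buffered_read(["blk0", "blk1", "blk2"], out.append)
print(out)   # ['blk0', 'blk1', 'blk2']
```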
Circular Buffer
• More than two buffers are used
• Each individual buffer is one unit in a circular
buffer
• Used when I/O operation must keep up with
process
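A minimal ring-buffer sketch (illustrative names), with the producer standing in for the device and the consumer for the process:

```python
# Circular-buffer sketch: N fixed-size slots reused in ring order, so
# the producer (device) can run ahead of the consumer (process).
class CircularBuffer:
    def __init__(self, n):
        self.slots = [None] * n
        self.head = self.tail = self.count = 0

    def put(self, item):                 # device fills the next free slot
        if self.count == len(self.slots):
            raise BufferError("all slots full; producer must wait")
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def get(self):                       # process drains the oldest slot
        if self.count == 0:
            raise BufferError("empty; consumer must wait")
        item = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return item

buf = CircularBuffer(3)
for block in ("b0", "b1", "b2"):
    buf.put(block)
print(buf.get(), buf.get())   # b0 b1
```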
Disk Scheduling
Disk Performance Parameters
To read or write, the disk head must be
positioned at the desired track and at the
beginning of the desired sector
• Seek time
– Time it takes to position the head at the desired
track
• Rotational delay or rotational latency
– Time it takes for the beginning of the sector to reach the head
Disk Performance Parameters
• Access time
– Sum of seek time and rotational delay
– The time it takes to get in position to read
or write
• Data transfer occurs as the sector
moves under the head
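With these definitions, a rough access-time estimate is seek time plus average rotational delay (half a revolution) plus transfer time. A sketch, using hypothetical drive parameters (4 ms average seek, 7,200 rpm, 150 MB/s transfer rate):

```python
# Rough disk access-time estimate for one sector, with illustrative
# drive parameters (not taken from any particular drive).
def access_time_ms(seek_ms=4.0, rpm=7200, sector_bytes=512, rate_mb_s=150):
    rotational_delay_ms = 0.5 * (60_000 / rpm)        # half a revolution
    transfer_ms = sector_bytes / (rate_mb_s * 1e6) * 1000
    return seek_ms + rotational_delay_ms + transfer_ms

print(round(access_time_ms(), 3))   # ≈ 8.17 ms; rotation dominates seek here
```

Note that at 7,200 rpm the average rotational delay alone is about 4.17 ms, comparable to the seek time, while the per-sector transfer time is negligible.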
Timing of a Disk I/O Transfer
Disk Scheduling Policies
• Seek time is the reason for differences in
performance
• For a single disk there will be a number of
I/O requests
• If requests are selected randomly, we will
get poor performance
Disk Scheduling Policies
• First-in, first-out (FIFO)
– Processes requests sequentially
– Fair to all processes
– Approaches random scheduling in performance if
there are many processes
With FIFO, if there are only a few processes
that require access and if many of the
requests are to clustered file sectors, then we
can hope for good performance.
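The head-movement cost of FIFO can be sketched directly; the track numbers below are an illustrative request queue (requests in arrival order, head initially at track 100):

```python
# Total head movement under FIFO: serve requests strictly in arrival
# order and sum the track-to-track distances.
def fifo_movement(start, requests):
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)
        pos = track
    return total

print(fifo_movement(100, [55, 58, 39, 18, 90, 160, 150, 38, 184]))  # 498
```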
Disk Scheduling Policies
• Priority
– Goal is not to optimize disk use but to meet
other objectives
– Short batch jobs may have higher priority
– Provide good interactive response time
However, longer jobs may have to wait an
excessively long time.
Disk Scheduling Policies
• SCAN (elevator algorithm)
– Arm moves in one direction only, satisfying all
outstanding requests until it reaches the last
track in that direction
– Direction is reversed
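A minimal sketch of the SCAN service order (illustrative queue, head at track 100, initially moving toward higher-numbered tracks):

```python
# SCAN (elevator): service every request in the current direction,
# then reverse and service the rest on the way back.
def scan_order(start, requests, direction="up"):
    lower = sorted(t for t in requests if t < start)
    upper = sorted(t for t in requests if t >= start)
    if direction == "up":
        return upper + lower[::-1]
    return lower[::-1] + upper

print(scan_order(100, [55, 58, 39, 18, 90, 160, 150, 38, 184]))
# [150, 160, 184, 90, 58, 55, 39, 38, 18]
```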
SCAN does not exploit locality as well as SSTF (shortest service time first) does.
Disk Scheduling Policies
• N-step-SCAN
– Segments the disk request queue into
subqueues of length N
– Subqueues are processed one at a time,
using SCAN
– While a subqueue is being processed, new requests are added to one of the other subqueues
• FSCAN
– Uses two subqueues
– While one queue is processed using SCAN, all new requests are placed in the other queue
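A sketch of the FSCAN idea (hypothetical class and method names); the point is that requests arriving during a sweep must wait for the next sweep, so a stream of new requests near the head cannot starve the rest:

```python
# FSCAN sketch: two queues; the active queue is serviced with SCAN
# while every arriving request goes into the other queue.
class FSCAN:
    def __init__(self):
        self.active, self.waiting = [], []

    def add(self, track):
        self.waiting.append(track)   # new requests never join the active queue

    def next_sweep(self, head):
        # Swap queues, then service the old waiting queue in SCAN order.
        self.active, self.waiting = self.waiting, []
        up = sorted(t for t in self.active if t >= head)
        down = sorted((t for t in self.active if t < head), reverse=True)
        return up + down

sched = FSCAN()
for t in (55, 58, 39, 18, 90):
    sched.add(t)
print(sched.next_sweep(100))   # [90, 58, 55, 39, 18]
```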
Disk Scheduling Algorithms
RAID
• Redundant Array of Independent Disks
• Set of physical disk drives viewed by the
operating system as a single logical drive
• Data are distributed across the physical
drives of an array
• Redundant disk capacity is used to store
parity information
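The parity idea behind the block-level RAID schemes can be illustrated with XOR: the parity block is the bitwise XOR of the data blocks, so any one lost block can be rebuilt from the survivors. A sketch with illustrative 4-byte blocks:

```python
# Block parity as used in RAID 4/5: parity = XOR of all data blocks.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xAA\xBB\xCC\xDD"]
parity = xor_blocks(data)

# Simulate losing data[1] and rebuilding it from the rest plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])   # True
```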
RAID 0 (non-redundant)
RAID 1 (mirrored)
RAID 2 (redundancy through Hamming code)
RAID 3 (bit-interleaved parity)
RAID 4 (block-level parity)
RAID 5 (block-level distributed parity)
RAID 6 (dual redundancy)
Disk Cache
• Buffer in main memory for disk sectors
• Contains a copy of some of the sectors on
the disk
Least Recently Used (LRU)
• The block that has been in the cache the
longest with no reference to it is replaced
• The cache consists of a stack of blocks
• Most recently referenced block is on the
top of the stack
• When a block is referenced or brought
into the cache, it is placed on the top of
the stack
(LRU)
• The block on the bottom of the stack is
removed when a new block is brought in
• Blocks don’t actually move around in main
memory
• A stack of pointers is used
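The stack-of-pointers behavior can be sketched with an ordered dictionary standing in for the stack (hypothetical names; `read_block` models fetching a missing sector from disk):

```python
# LRU disk-cache sketch: most recently used block at one end of the
# ordered dict, the eviction victim at the other.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block number -> block data

    def access(self, block_no, read_block):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)      # re-referenced: to the top
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)    # evict least recently used
            self.blocks[block_no] = read_block(block_no)
        return self.blocks[block_no]

cache = LRUCache(2)
fetch = lambda n: f"data-{n}"
cache.access(1, fetch); cache.access(2, fetch); cache.access(1, fetch)
cache.access(3, fetch)                       # evicts block 2, not block 1
print(list(cache.blocks))                    # [1, 3]
```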
Least Frequently Used (LFU)
• The block that has experienced the fewest
references is replaced
• A counter is associated with each block
• Counter is incremented each time block
accessed
• Block with smallest count is selected for
replacement
• Some blocks may be referenced many times
in a short period of time and the reference
count is misleading
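A minimal LFU sketch (hypothetical names) that also shows the bursty-reference problem described above:

```python
# LFU sketch: each cached block carries a reference counter; the block
# with the smallest count is the replacement victim.
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.count = {}      # block number -> reference count

    def access(self, block_no):
        if block_no not in self.count:
            if len(self.count) >= self.capacity:
                victim = min(self.count, key=self.count.get)
                del self.count[victim]      # evict least frequently used
            self.count[block_no] = 0
        self.count[block_no] += 1

cache = LFUCache(2)
for b in (1, 1, 1, 2, 3):    # block 1 referenced in a short burst
    cache.access(b)
print(sorted(cache.count))   # [1, 3]: block 2 evicted despite being recent
```

The burst inflates block 1's count, so the recently used block 2 becomes the victim; this is the misleading-count effect the slide warns about.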
To overcome this difficulty with LFU, a technique
known as frequency-based replacement can be used.
Linux I/O
• Elevator scheduler
– Maintains a single queue for disk read and
write requests
– Keeps list of requests sorted by block number
– Drive moves in a single direction, satisfying each request as it is reached
Linux I/O
• Deadline scheduler
– Uses three queues
• Incoming requests are placed in a sorted elevator queue
• Read requests also go to the tail of a read FIFO queue
• Write requests also go to the tail of a write FIFO queue
– Each request has an expiration time
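A simplified sketch of the deadline idea (hypothetical class; the 500 ms read and 5 s write expiry values are the commonly cited Linux defaults): a request is normally served from the sorted queue, unless the request at the head of a FIFO queue has expired, in which case it is served first.

```python
# Deadline-scheduler sketch: sorted queue by block number, plus read
# and write FIFO queues carrying per-request expiration times (ms).
import heapq

class DeadlineScheduler:
    EXPIRY = {"read": 500, "write": 5000}

    def __init__(self):
        self.sorted_q = []                    # (block, op), ordered by block
        self.fifo = {"read": [], "write": []}

    def submit(self, now_ms, block, op):
        heapq.heappush(self.sorted_q, (block, op))
        self.fifo[op].append((now_ms + self.EXPIRY[op], block))

    def dispatch(self, now_ms):
        # Expired request at the head of a FIFO queue jumps the line.
        for op in ("read", "write"):
            if self.fifo[op] and self.fifo[op][0][0] <= now_ms:
                _, block = self.fifo[op].pop(0)
                self.sorted_q.remove((block, op))
                heapq.heapify(self.sorted_q)
                return block
        # Otherwise serve the lowest-numbered block (elevator order).
        block, op = heapq.heappop(self.sorted_q)
        self.fifo[op] = [(d, b) for d, b in self.fifo[op] if b != block]
        return block

s = DeadlineScheduler()
s.submit(0, 50, "write")
s.submit(0, 100, "read")
print(s.dispatch(600))   # 100: the read expired, so it beats block 50
print(s.dispatch(600))   # 50
```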
Linux I/O
Sheet 3
Review Questions: page 517
Problems: 1, 2, 3, 7, 8, 9, 10, 12