Chapter 3: Input/Output Devices

I/O Management and Disk Scheduling
Agenda
• I/O Devices
• Organization of the I/O Function
• Operating System Design Issues
• I/O Buffering
• Disk Scheduling
• RAID
• Disk Cache
• Linux I/O
I/O Devices

Categories of I/O Devices
External devices that engage in I/O with
computer systems can be roughly grouped into
three categories:
• Human readable
– Used to communicate with the user
– Printers
– Video display terminals
• Display
• Keyboard
• Mouse

• Machine readable
– Used to communicate with electronic
equipment
– Disk and tape drives
– Sensors
– Controllers
– Actuators

• Communication
– Used to communicate with remote devices
– Digital line drivers
– Modems

Differences in I/O Devices
• Data rate
– Data transfer rates may differ by several
orders of magnitude between devices

Differences in I/O Devices
• Application
– Disk used to store files requires file
management software
– Disk used to store virtual memory pages
needs special hardware and software to
support it
– Terminal used by system administrator
may have a higher priority

Differences in I/O Devices
• Complexity of control
• Unit of transfer
– Data may be transferred as a stream of
bytes for a terminal or in larger blocks for
a disk
• Data representation
– Encoding schemes
• Error conditions
– Devices respond to errors differently
Organization of the I/O Function

Performing I/O
• Programmed I/O
– Process is busy-waiting for the operation
to complete
• Interrupt-driven I/O
– I/O command is issued
– Processor continues executing
instructions
– I/O module sends an interrupt when done

Performing I/O
• Direct Memory Access (DMA)
– DMA module controls exchange of data
between main memory and the I/O device
– Processor interrupted only after entire
block has been transferred

Relationship Among Techniques

Evolution of the I/O Function
• Processor directly controls a peripheral
device
• Controller or I/O module is added
– Processor uses programmed I/O without
interrupts
– Processor does not need to handle details
of external devices

Evolution of the I/O Function
• Controller or I/O module with interrupts
– Processor does not spend time waiting for
an I/O operation to be performed
• Direct Memory Access
– Blocks of data are moved into memory
without involving the processor
– Processor involved at beginning and end
only

Evolution of the I/O Function
• I/O module is a separate processor
• I/O processor
– I/O module has its own local memory
– It is a computer in its own right

Direct Memory Access
• Processor delegates I/O operation to
the DMA module
• DMA module transfers data directly to
or from memory
• When complete, the DMA module sends an
interrupt signal to the processor

DMA

The DMA technique works as follows.

When the processor wishes to read or write a block
of data, it issues a command to the DMA module by
sending it the following information:

• Whether a read or write is requested, using the
read or write control line between the processor
and the DMA module
• The address of the I/O device involved,
communicated on the data lines
• The starting location in memory to read from or
write to, communicated on the data lines and
stored by the DMA module in its address register
• The number of words to be read or written, again
communicated via the data lines and stored in the
data count register
The processor then continues with other work; it has
delegated this I/O operation to the DMA module.

The DMA module transfers the entire block of data,
one word at a time, directly to or from memory,
without going through the processor.

When the transfer is complete, the DMA module
sends an interrupt signal to the processor. Thus,
the processor is involved only at the beginning and
end of the transfer.
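The handshake just described can be sketched in Python. This is a sequential simulation of the protocol, not a real controller; the class and register names here are illustrative.

```python
class DMAModule:
    def __init__(self, memory):
        self.memory = memory                  # shared "main memory" (a list of words)

    def command(self, op, device, start, count, on_done):
        # The processor hands over: read/write flag, device, starting memory
        # location, and word count -- then goes back to other work.
        if op == "read":                      # device -> memory
            for i in range(count):            # one word at a time, no CPU involvement
                self.memory[start + i] = device.read_word(i)
        else:                                 # memory -> device
            for i in range(count):
                device.write_word(i, self.memory[start + i])
        on_done()                             # the completion "interrupt"

class Device:
    def __init__(self, data):
        self.data = data
    def read_word(self, i):
        return self.data[i]
    def write_word(self, i, word):
        self.data[i] = word

memory = [0] * 8
dma = DMAModule(memory)
done = []
dma.command("read", Device([7, 8, 9]), start=4, count=3,
            on_done=lambda: done.append(True))
print(memory)   # [0, 0, 0, 0, 7, 8, 9, 0]
```

Note that the processor's only involvement is issuing `command` and handling `on_done`, mirroring the "beginning and end only" point above.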
DMA Configurations
The DMA mechanism can be configured in a variety
of ways.

DMA Configurations

Operating System Design Issues

Operating System Design
Issues
• Efficiency
– Most I/O devices are extremely slow
compared to main memory
– Multiprogramming allows some processes
to wait on I/O while another process
executes
– Even so, I/O may not keep up with
processor speed
– Swapping brings in additional ready
processes, but swapping is itself an I/O
operation
Operating System Design
Issues
• Generality
– Desirable to handle all I/O devices in a
uniform manner
– Hide most of the details of device I/O in
lower-level routines so that processes and
upper levels see devices in general terms
such as read, write, open, close, lock,
unlock

Logical Structure of the I/O Function
The details of the organization will depend on the
type of device and the application. The three most
important logical structures are presented in the
previous figure.

However, the general principles are valid, and most
operating systems approach I/O in approximately
this way.
Let us consider the simplest case first, that of a local
peripheral device that communicates in a simple
fashion, such as a stream of bytes or records
(Figure 11.4a). The following layers are involved:

• Logical I/O:
The logical I/O module deals with the device as a
logical resource and is not concerned with the
details of actually controlling the device. It is
concerned with managing general I/O functions on
behalf of user processes, allowing them to deal with
the device in terms of a device identifier and simple
commands such as open, close, read, and write.
• Device I/O:
The requested operations and data (buffered
characters, records, etc.) are converted into
appropriate sequences of I/O instructions, channel
commands, and controller orders. Buffering
techniques may be used to improve utilization.

• Scheduling and control:
The actual queueing and scheduling of I/O operations
occurs at this layer, as well as the control of the
operations. Thus, interrupts are handled at this layer,
and I/O status is collected and reported. This is the
layer of software that actually interacts with the I/O
module and hence the device hardware.
For a communications device, the I/O structure
(Figure 11.4b) looks much the same as that just
described. The principal difference is that the logical
I/O module is replaced by a communications
architecture, which may itself consist of a number of
layers. An example is TCP/IP.

Figure 11.4c shows a representative structure for
managing I/O on a secondary storage device that
supports a file system. The three layers not
previously discussed are as follows:
• Directory management: At this layer, symbolic file
names are converted to identifiers that reference the
file either directly or indirectly through a file
descriptor or index table. This layer is also concerned
with user operations that affect the directory of files,
such as add, delete, and reorganize.

• File system: This layer deals with the logical
structure of files and with the operations that can be
specified by users, such as open, close, read, and
write. Access rights are also managed at this layer.
• Physical organization: Just as virtual memory
addresses must be converted into physical main
memory addresses, taking into account the
segmentation and paging structure, logical references
to files and records must be converted to physical
secondary storage addresses, taking into account
the physical track and sector structure of the
secondary storage device. Allocation of secondary
storage space and main storage buffers is generally
treated at this layer as well.
Buffering

I/O Buffering
• Reasons for buffering
– Processes must wait for I/O to complete
before proceeding
– Certain pages must remain in main
memory during I/O

I/O Buffering
• Block-oriented
– Information is stored in fixed sized blocks
– Transfers are made a block at a time
– Used for disks and tapes
• Stream-oriented
– Transfer information as a stream of bytes
– Used for terminals, printers,
communication ports, mouse and other
pointing devices, and most other devices
that are not secondary storage
Single Buffer
• Operating system assigns a buffer in
main memory for an I/O request
• Block-oriented
– Input transfers made to buffer
– Block moved to user space when needed
– Another block is moved into the buffer
• Read ahead

Single Buffer
• Stream-oriented
– Used a line at a time
– User input from a terminal is one line at a
time with carriage return signaling the end
of the line
– Output to the terminal is one line at a time

I/O Buffering

Double Buffer
• Use two system buffers instead of one
• A process can transfer data to or from one
buffer while the operating system empties or
fills the other buffer
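A minimal sketch of the double-buffering idea. Names are illustrative, and in a real system the filling of one buffer by the OS and the consumption of the other by the process overlap in time rather than running one after the other as simulated here:

```python
def double_buffered_read(blocks, process):
    buffers = [None, None]
    fill, use = 0, 1
    buffers[fill] = blocks[0]                 # prime the first buffer
    for i in range(1, len(blocks) + 1):
        fill, use = use, fill                 # swap buffer roles each cycle
        if i < len(blocks):
            buffers[fill] = blocks[i]         # OS fills one buffer...
        process(buffers[use])                 # ...while the process drains the other

out = []
double_buffered_read(["b0", "b1", "b2"], out.append)
print(out)   # ['b0', 'b1', 'b2']
```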

Circular Buffer
• More than two buffers are used
• Each individual buffer is one unit in a circular
buffer
• Used when I/O operation must keep up with
process

Disk Scheduling

Disk Performance
Parameters
To read or write, the disk head must be
positioned at the desired track and at the
beginning of the desired sector
• Seek time
– Time it takes to position the head at the desired
track
• Rotational delay or rotational latency
– Time it takes for the beginning of the sector to
reach the head

Disk Performance
Parameters
• Access time
– Sum of seek time and rotational delay
– The time it takes to get in position to read
or write
• Data transfer occurs as the sector
moves under the head
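The parameters above combine into the standard estimate: access time = seek time + rotational delay + transfer time, where the average rotational delay is half a revolution. A small calculation (the drive numbers below are illustrative, not from a specific disk):

```python
# Rough disk access-time estimate: seek + rotational delay + transfer.

def access_time_ms(seek_ms, rpm, bytes_to_read, bytes_per_track):
    rev_ms = 60_000 / rpm                 # time for one full revolution (ms)
    rotational_delay = rev_ms / 2         # on average, half a revolution
    transfer = rev_ms * bytes_to_read / bytes_per_track
    return seek_ms + rotational_delay + transfer

# e.g. 4 ms average seek, 7200 rpm, one 512-byte sector on a 500 KB track
t = access_time_ms(4, 7200, 512, 500_000)
print(round(t, 3))   # 8.175 -- positioning dominates; the transfer itself is tiny
```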

Timing of a Disk I/O Transfer

Agenda
• I/O Devices
• Organization of the I/O Function
• Operating System Design Issues
• I/O Buffering
• Disk Scheduling
• RAID
• Disk Cache
• Linux I/O
Disk Scheduling Policies
• Seek time is the reason for differences in
performance
• For a single disk there will be a number of
pending I/O requests
• If requests are selected randomly, we will
get poor performance
Disk Scheduling Policies
• First-in, first-out (FIFO)
– Processes requests sequentially
– Fair to all processes
– Approaches random scheduling in performance if
there are many processes

With FIFO, if there are only a few processes that
require access, and if many of the requests are to
clustered file sectors, then we can hope for good
performance.

However, this technique will often approximate
random scheduling in performance if there are
many processes competing for the disk.
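The head-movement cost of FIFO can be sketched directly. The trace below is a commonly used textbook example: a disk with 200 tracks, the head starting at track 100, and requests served strictly in arrival order:

```python
def fifo_movement(start, requests):
    # serve requests strictly in arrival order, summing the head movement
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)
        pos = track
    return total

print(fifo_movement(100, [55, 58, 39, 18, 90, 160, 150, 38, 184]))   # 498
```

A total of 498 tracks traversed for 9 requests — an average seek of about 55 tracks, close to what random selection would give.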
Disk Scheduling Policies
• Priority
– Goal is not to optimize disk use but to meet
other objectives
– Short batch jobs may have higher priority
– Provide good interactive response time

However, longer jobs may have to wait excessively
long times.

Furthermore, such a policy could lead to
countermeasures on the part of users, who split
their jobs into smaller pieces to beat the system.

This type of policy tends to be poor for database
systems.
Disk Scheduling Policies
• Shortest Service Time First
– Select the disk I/O request that requires the
least movement of the disk arm from its
current position
– Always chooses the minimum seek time

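SSTF can be sketched the same way. On the same illustrative trace (head starting at track 100), always picking the closest pending request cuts total movement to roughly half of FIFO's:

```python
def sstf_movement(start, requests):
    pending, pos, total = list(requests), start, 0
    while pending:
        # always serve the pending request closest to the current head position
        nxt = min(pending, key=lambda t: abs(t - pos))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

print(sstf_movement(100, [55, 58, 39, 18, 90, 160, 150, 38, 184]))   # 248
```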
Disk Scheduling Policies
• SCAN (elevator algorithm)
– Arm moves in one direction only, satisfying all
outstanding requests until it reaches the last
track in that direction
– Direction is reversed

SCAN does not exploit locality as well as SSTF.

It is not difficult to see that the SCAN policy favors
jobs whose requests are for tracks nearest to both
the innermost and outermost tracks, and favors the
latest-arriving jobs.

The first problem can be avoided via the C-SCAN
policy; the second problem is addressed by the
N-step-SCAN policy.
Disk Scheduling Policies
• C-SCAN
– Restricts scanning to one direction only
– When the last track has been visited in one
direction, the arm is returned to the opposite end
of the disk and the scan begins again

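The service orders produced by SCAN and C-SCAN can be sketched as follows, again on the same illustrative trace with the head at track 100 and an upward initial sweep assumed:

```python
def scan_order(start, requests, direction="up"):
    # SCAN: sweep toward one end serving requests, then reverse direction
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down if direction == "up" else down + up

def cscan_order(start, requests):
    # C-SCAN: sweep upward only; after the last request, return and sweep again
    up = sorted(t for t in requests if t >= start)
    wrapped = sorted(t for t in requests if t < start)
    return up + wrapped

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(scan_order(100, reqs))   # [150, 160, 184, 90, 58, 55, 39, 38, 18]
print(cscan_order(100, reqs))  # [150, 160, 184, 18, 38, 39, 55, 58, 90]
```

The difference shows exactly why C-SCAN exists: after the top of the sweep, SCAN serves track 90 next (it was just passed), while C-SCAN jumps back and serves 18 first, equalizing the expected wait across tracks.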
Disk Scheduling Policies
• N-step-SCAN
– Segments the disk request queue into
subqueues of length N
– Subqueues are processed one at a time,
using SCAN
– While a subqueue is being processed, new
requests are added to some other subqueue
• FSCAN
– Uses two subqueues
– While one queue is being processed, all
new requests go into the other queue
Disk Scheduling Algorithms

RAID
• Redundant Array of Independent Disks
• Set of physical disk drives viewed by the
operating system as a single logical drive
• Data are distributed across the physical
drives of an array
• Redundant disk capacity is used to store
parity information

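The parity idea used by RAID levels 3 through 5 can be illustrated with XOR: the parity strip is the bitwise XOR of the data strips, so any single lost strip is recoverable from the survivors. A minimal sketch:

```python
from functools import reduce

def parity(strips):
    # byte-wise XOR across all strips; with N data strips this is the parity strip
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

data = [b"\x0f\x12", b"\xf0\x34", b"\xaa\x56"]   # three data strips
p = parity(data)     # kept on a dedicated parity disk (RAID 3/4) or distributed (RAID 5)

# simulate losing strip 1, then rebuilding it from the survivors plus parity
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])   # True
```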
RAID 0 (non-redundant)

RAID 1 (mirrored)

RAID 2 (redundancy through Hamming code)

RAID 3 (bit-interleaved parity)

RAID 4 (block-level parity)

RAID 5 (block-level distributed parity)

RAID 6 (dual redundancy)
Disk Cache
• Buffer in main memory for disk sectors
• Contains a copy of some of the sectors on
the disk

Least Recently Used (LRU)
• The block that has been in the cache the
longest with no reference to it is replaced
• The cache consists of a stack of blocks
• Most recently referenced block is on the
top of the stack
• When a block is referenced or brought
into the cache, it is placed on the top of
the stack

(LRU)
• The block on the bottom of the stack is
removed when a new block is brought in
• Blocks don’t actually move around in main
memory
• A stack of pointers is used

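The stack-of-pointers scheme maps naturally onto an ordered dictionary: move a block to the front on reference, evict from the back when the cache is full. A sketch; the `read_block` callback standing in for an actual disk read is hypothetical:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()           # last entry = most recently used

    def reference(self, block_no, read_block):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)     # hit: move to top of the stack
            return self.blocks[block_no]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)       # evict the least recently used
        self.blocks[block_no] = read_block(block_no)
        return self.blocks[block_no]

cache = LRUCache(2)
disk_reads = []
fetch = lambda n: disk_reads.append(n) or f"data{n}"
cache.reference(1, fetch)
cache.reference(2, fetch)
cache.reference(1, fetch)     # hit: no disk read
cache.reference(3, fetch)     # evicts block 2, the LRU block
print(disk_reads)             # [1, 2, 3]
```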
Least Frequently Used
(LFU)
• The block that has experienced the fewest
references is replaced
• A counter is associated with each block
• Counter is incremented each time block
accessed
• Block with smallest count is selected for
replacement
• Some blocks may be referenced many times
in a short period of time and the reference
count is misleading

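An LFU sketch with per-block counters, which also shows the pitfall just mentioned: a short burst of references inflates a block's count, so it survives eviction even once it is no longer needed:

```python
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.count = {}                       # block -> reference count

    def reference(self, block_no):
        if block_no in self.count:
            self.count[block_no] += 1         # counter bumped on every access
            return
        if len(self.count) >= self.capacity:
            victim = min(self.count, key=self.count.get)
            del self.count[victim]            # evict the least-referenced block
        self.count[block_no] = 1

cache = LFUCache(2)
for b in [1, 1, 1, 2]:        # short burst of references to block 1
    cache.reference(b)
cache.reference(3)            # evicts block 2 -- the stale block 1 is kept
print(sorted(cache.count))    # [1, 3]
```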
To overcome this difficulty with LFU, a technique
known as frequency-based replacement can be used.

See the text, page 504.
Linux I/O
• Elevator scheduler
– Maintains a single queue for disk read and
write requests
– Keeps list of requests sorted by block number
– Drive moves in a single direction, satisfying
each request
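A toy version of the elevator idea — keep pending requests sorted by block number and serve them in one direction per sweep. This is greatly simplified relative to the real Linux scheduler:

```python
import bisect

class Elevator:
    def __init__(self):
        self.queue = []                       # pending block numbers, kept sorted

    def add(self, block):
        bisect.insort(self.queue, block)      # sorted insertion by block number

    def sweep(self, head):
        # serve everything at or beyond the head first, then the rest on the way back
        i = bisect.bisect_left(self.queue, head)
        order = self.queue[i:] + self.queue[:i][::-1]
        self.queue = []
        return order

e = Elevator()
for b in [95, 180, 34, 119, 11, 123, 62, 64]:
    e.add(b)
order = e.sweep(50)
print(order)   # [62, 64, 95, 119, 123, 180, 34, 11]
```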
Linux I/O
• Deadline scheduler
– Uses three queues
• A sorted (elevator-style) queue of incoming requests
• Read requests also go to the tail of a read FIFO queue
• Write requests also go to the tail of a write FIFO queue
– Each request has an expiration time
Linux I/O

Sheet 3
Review Questions, page 517
Problems 1, 2, 3, 7, 8, 9, 10, 12
