
Operating Systems: Internals and Design Principles
Seventh Edition
By William Stallings

Chapter 11
I/O Management and Disk Scheduling
Categories of I/O Devices
External devices that engage in I/O with computer
systems can be grouped into three categories:

Human readable

• suitable for communicating with the computer user


• printers, terminals, video display, keyboard, mouse

Machine readable

• suitable for communicating with electronic equipment


• disk drives, USB keys, sensors, controllers

Communication

• suitable for communicating with remote devices


• modems, digital line drivers
Differences in I/O Devices
◼ Devices differ in a number of areas:
Data Rate
• there may be differences of several orders of magnitude between the data transfer rates

Application
• the use to which a device is put has an influence on the software

Complexity of Control
• the effect on the operating system is filtered by the complexity of the I/O module that controls the device

Unit of Transfer
• data may be transferred as a stream of bytes or characters or in larger blocks

Data Representation
• different data encoding schemes are used by different devices

Error Conditions
• the nature of errors, the way in which they are reported, their consequences, and
the available range of responses differ from one device to another
Data Rates

Disk Performance Parameters
◼ The actual details of disk I/O operation depend on the:

◼ computer system

◼ operating system

◼ nature of the I/O channel and disk controller hardware
Positioning the Read/Write Heads
◼ When the disk drive is operating, the disk is rotating at constant speed
◼ To read or write, the head must be positioned at the desired track and
at the beginning of the desired sector on that track
◼ Track selection involves moving the head in a movable-head system or
electronically selecting one head on a fixed-head system
◼ On a movable-head system the time it takes to position the head at the
track is known as seek time
◼ The time it takes for the beginning of the sector to reach the head is
known as rotational delay
◼ The sum of the seek time and the rotational delay equals the access
time
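
As a rough sketch of that relationship, the small calculation below uses assumed figures (a 4 ms average seek and a 7200 rpm spindle, with the average rotational delay taken as half a revolution); they are for illustration only, not values from the text.

```python
# Illustrative access-time calculation; seek time and rotation speed are assumed.
seek_time_ms = 4.0                              # average seek time (assumed)
rpm = 7200                                      # spindle speed (assumed)
rotational_delay_ms = 0.5 * (60_000 / rpm)      # average delay: half a revolution
access_time_ms = seek_time_ms + rotational_delay_ms
print(f"average access time ~ {access_time_ms:.2f} ms")   # about 8.17 ms
```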
Table 11.2 Comparison of Disk Scheduling Algorithms
First-In, First-Out (FIFO)
◼ Processes requests in sequential order

◼ Fair to all processes

◼ Approximates random scheduling in performance if there are many
processes competing for the disk
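
A minimal sketch of FIFO scheduling in Python; the request queue and starting head position below are assumed for illustration only.

```python
def fifo(requests, start):
    """Service track requests in arrival order; return (service order, total arm movement)."""
    order, pos, moved = [], start, 0
    for track in requests:
        moved += abs(track - pos)
        pos = track
        order.append(track)
    return order, moved

# Assumed request queue and starting head position (track 100), for illustration.
print(fifo([55, 58, 39, 18, 90, 160, 150, 38, 184], start=100))
```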
Table 11.3 Disk Scheduling Algorithms
Priority (PRI)
◼ Control of the scheduling is outside the control of disk management
software

◼ Goal is not to optimize disk utilization but to meet other objectives

◼ Short batch jobs and interactive jobs are given higher priority

◼ Provides good interactive response time

◼ Longer jobs may have to wait an excessively long time

◼ A poor policy for database systems


Shortest Service Time First (SSTF)
◼ Select the disk I/O request that requires the least movement of the disk
arm from its current position

◼ Always choose the minimum seek time
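
A sketch of SSTF under the same assumed setup as the FIFO example: at each step the pending track closest to the current head position is chosen.

```python
def sstf(requests, start):
    """Repeatedly pick the pending request closest to the current head position."""
    pending, pos, order, moved = list(requests), start, [], 0
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))   # minimum seek from here
        pending.remove(track)
        moved += abs(track - pos)
        pos = track
        order.append(track)
    return order, moved
```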
SCAN
◼ Also known as the elevator algorithm

◼ Arm moves in one direction only

◼ Satisfies all outstanding requests until it reaches the last track in that
direction, then the direction is reversed

◼ Favors jobs whose requests are for tracks nearest to both innermost and
outermost tracks
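
A sketch of SCAN, with one simplification: the sweep turns around at the outermost pending request rather than continuing to the physical last track (strictly speaking, the LOOK variant). The starting direction is assumed to be toward higher-numbered tracks.

```python
def scan(requests, start):
    """Elevator sketch: sweep toward higher tracks serving requests, then
    reverse and serve the rest.  Simplification: the sweep reverses at the
    outermost pending request instead of the physical last track."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    order = up + down
    moved = sum(abs(b - a) for a, b in zip([start] + order, order))
    return order, moved
```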
C-SCAN (Circular SCAN)
◼ Restricts scanning to one direction only

◼ When the last track has been visited in one direction, the arm is returned
to the opposite end of the disk and the scan begins again
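
A sketch of the C-SCAN service order under the same assumptions; the return sweep is simplified to restart at the lowest pending request, and no requests are served during the return seek.

```python
def c_scan(requests, start):
    """Circular SCAN sketch: serve requests only while moving toward higher
    tracks, then jump back and sweep again in the same direction.
    Simplification: the new sweep restarts at the lowest pending request."""
    up = sorted(t for t in requests if t >= start)
    wrap = sorted(t for t in requests if t < start)
    return up + wrap          # service order; the return seek itself serves nothing
```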
N-Step-SCAN
◼ Segments the disk request queue into subqueues of length N

◼ Subqueues are processed one at a time, using SCAN

◼ While a queue is being processed new requests must be added to some
other queue

◼ If fewer than N requests are available at the end of a scan, all of
them are processed with the next scan
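
A sketch of N-step-SCAN that reuses the scan() sketch above. The queue is treated as static here, so the rule that requests arriving during a scan join a later subqueue is noted but not modeled.

```python
def n_step_scan(requests, start, n):
    """Split the queue into subqueues of length N and run SCAN on each in turn.
    Requests arriving during a scan would join a later subqueue (not modeled)."""
    pos, order = start, []
    for i in range(0, len(requests), n):
        segment, _ = scan(requests[i:i + n], pos)   # scan() as sketched above
        order += segment
        if segment:
            pos = segment[-1]
    return order
```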
FSCAN
◼ Uses two subqueues

◼ When a scan begins, all of the requests are in one of the queues,
with the other empty

◼ During scan, all new requests are put into the other queue

◼ Service of new requests is deferred until all of the old requests have
been processed
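
A two-queue sketch of FSCAN, again reusing scan() from above; the requests that arrive during the first sweep are passed in explicitly rather than generated dynamically.

```python
def fscan(old_requests, new_requests, start):
    """Two-queue sketch: serve everything already queued with SCAN while new
    arrivals collect in the second queue, then swap and serve that queue."""
    first, _ = scan(old_requests, start)            # scan() as sketched above
    pos = first[-1] if first else start
    second, _ = scan(new_requests, pos)             # deferred until old ones finish
    return first + second
```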
RAID (Redundant Array of Independent Disks)
◼ Consists of seven levels, zero through six

◼ Design architectures share three characteristics:

• RAID is a set of physical disk drives viewed by the operating system as a
single logical drive

• data are distributed across the physical drives of an array in a scheme
known as striping

• redundant disk capacity is used to store parity information, which
guarantees data recoverability in case of a disk failure
Table 11.4 RAID Levels
RAID Level 0
◼ Not a true RAID because it does not include redundancy to improve
performance or provide data protection

◼ User and system data are distributed across all of the disks in the array

◼ Logical disk is divided into strips
RAID Level 1
◼ Redundancy is achieved by the simple expedient of duplicating all the data

◼ There is no “write penalty”

◼ When a drive fails the data may still be accessed from the second drive

◼ Principal disadvantage is the cost
RAID Level 2
◼ Makes use of a parallel access technique

◼ Data striping is used

◼ Typically a Hamming code is used

◼ Effective choice in an environment in which many disk errors occur
RAID Level 3
◼ Requires only a single redundant disk, no matter how large the disk array

◼ Employs parallel access, with data distributed in small strips

◼ Can achieve very high data transfer rates
RAID Level 4
◼ Makes use of an independent access technique

◼ A bit-by-bit parity strip is calculated across corresponding strips on each
data disk, and the parity bits are stored in the corresponding strip on the
parity disk

◼ Involves a write penalty when an I/O write request of small size is
performed
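
A minimal sketch of the parity idea behind RAID 4 (and RAID 5 below): the parity strip is the bitwise XOR of the corresponding data strips, so any single lost strip can be rebuilt by XOR-ing the survivors. The strip contents here are made up.

```python
from functools import reduce

strips = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]   # data strips on three disks (made up)
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

# Lose strip 1 and rebuild it from the surviving strips plus the parity strip.
survivors = [strips[0], strips[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == strips[1]
```

The small-write penalty mentioned above follows from the same arithmetic: updating one strip requires reading the old data and old parity first, because the new parity is the old parity XOR old data XOR new data.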
RAID Level 5
◼ Similar to RAID-4 but distributes the parity bits across all disks

◼ Typical allocation is a round-robin scheme

◼ Has the characteristic that the loss of any one disk does not result in
data loss
RAID Level 6
◼ Two different parity calculations are carried out and stored in separate
blocks on different disks

◼ Provides extremely high data availability

◼ Incurs a substantial write penalty because each write affects two parity
blocks
Disk Cache
◼ The term cache memory is usually applied to a memory that is smaller and faster
than main memory and that is interposed between main memory and the
processor

◼ Reduces average memory access time by exploiting the principle of locality

◼ Disk cache is a buffer in main memory for disk sectors

◼ Contains a copy of some of the sectors on the disk

◼ When an I/O request is made for a particular sector, a check is made to
determine if the sector is in the disk cache

• if YES, the request is satisfied via the cache

• if NO, the requested sector is read into the disk cache from the disk
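
A sketch of that lookup in Python; the dict standing in for the disk and the fixed cache capacity are assumptions for illustration.

```python
def read_sector(sector, cache, disk, capacity=8):
    """Sketch of the check above: satisfy the request via the cache if the
    sector is present, otherwise read it into the cache from the disk."""
    if sector in cache:                      # YES: satisfied via the cache
        return cache[sector]
    data = disk[sector]                      # NO: read the sector from the disk
    if len(cache) >= capacity:
        cache.pop(next(iter(cache)))         # make room; replacement policies below
    cache[sector] = data
    return data

disk = {s: f"sector {s} data" for s in range(64)}   # stand-in for the physical disk
cache = {}
read_sector(17, cache, disk)    # miss: loaded from "disk" into the cache
read_sector(17, cache, disk)    # hit: served from the cache
```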
Least Recently Used
(LRU)
◼ Most commonly used algorithm that deals with the design issue of
replacement strategy

◼ The block that has been in the cache the longest with no reference
to it is replaced

◼ A stack of pointers references the cache

◼ most recently referenced block is on the top of the stack

◼ when a block is referenced or brought into the cache, it is placed on the
top of the stack
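
A sketch of the stack scheme described above, using an ordered mapping in place of a literal stack of pointers; the class and method names are illustrative, not from the text.

```python
from collections import OrderedDict

class LRUCache:
    """Most recently referenced block sits at the stack top; the block that has
    gone unreferenced the longest is the one replaced."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()                 # key order plays the role of the stack

    def access(self, block_id, load_block):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)       # re-reference: move to top of stack
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)     # evict the least recently used block
            self.blocks[block_id] = load_block(block_id)
        return self.blocks[block_id]
```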
Least Frequently Used
(LFU)
◼ The block that has experienced the fewest references is replaced

◼ A counter is associated with each block

◼ Counter is incremented each time block is accessed

◼ When replacement is required, the block with the smallest count is
selected
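
A matching sketch of LFU: a counter per block, incremented on every access, with the smallest count evicted (ties broken arbitrarily here). Names are illustrative, as above.

```python
class LFUCache:
    """Per-block reference counters; the least frequently used block is replaced."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.count = {}

    def access(self, block_id, load_block):
        if block_id in self.data:
            self.count[block_id] += 1               # bump the counter on each access
            return self.data[block_id]
        if len(self.data) >= self.capacity:
            victim = min(self.count, key=self.count.get)   # fewest references
            del self.data[victim]
            del self.count[victim]
        self.data[block_id] = load_block(block_id)
        self.count[block_id] = 1
        return self.data[block_id]
```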
Summary
◼ I/O architecture is the computer system’s interface to the outside world

◼ I/O functions are generally broken up into a number of layers

◼ A key aspect of I/O is the use of buffers that are controlled by I/O utilities rather
than by application processes

◼ Buffering smooths out the differences between the internal speeds of the computer
system and the speeds of its I/O devices

◼ The use of buffers also decouples the actual I/O transfer from the address space of
the application process

◼ Disk I/O has the greatest impact on overall system performance

◼ Two of the most widely used approaches are disk scheduling and the disk cache

◼ A disk cache is a buffer, usually kept in main memory, that functions as a cache of
disk blocks between disk memory and the rest of main memory
