Chapter 11: Mass-Storage Systems
Overview of Mass Storage Structure
HDD Scheduling
NVM Scheduling
Error Detection and Correction
Storage Device Management
Swap-Space Management
Storage Attachment
RAID Structure
Objectives
Overview of Mass Storage Structure
Bulk of secondary storage for modern computers is hard disk drives (HDDs) and nonvolatile memory (NVM) devices
HDDs spin platters of magnetically-coated material under moving read-write heads
• Drives rotate at 60 to 250 times per second
• Transfer rate is the rate at which data flow between drive and computer
• Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency) (worked example below)
• Head crash results from the disk head making contact with the disk surface – that’s bad
Disks can be removable
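As a quick worked example of rotational latency: at 7,200 RPM (120 rotations per second), one rotation takes 60/7200 s ≈ 8.33 ms, so the average rotational latency (half a rotation) is about 4.17 ms.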
Moving-head Disk Mechanism
Hard Disk Drives
The First Commercial Disk Drive
1956
IBM RAMAC computer included the IBM Model 350 disk storage system
5M (7 bit) characters
50 x 24” platters
Access time < 1 second
Nonvolatile Memory Devices
If disk-drive-like, then called solid-state disks (SSDs)
Other forms include USB drives (thumb drive, flash drive), DRAM disk replacements, surface-mounted storage on motherboards, and main storage in devices like smartphones
Can be more reliable than HDDs
More expensive per MB
May have shorter life span – need careful management
Less capacity
But much faster
Busses can be too slow -> connect directly to PCI, for example
No moving parts, so no seek time or rotational latency
Nonvolatile Memory Devices
Have characteristics that present challenges
Read and written in “page” increments (think sector) but can’t overwrite in place
• Must first be erased, and erases happen in larger “block” increments
• Can only be erased a limited number of times before worn out – ~100,000
• Life span measured in drive writes per day (DWPD)
  A 1TB NAND drive with a rating of 5 DWPD is expected to have 5TB per day written within its warranty period
NAND Flash Controller Algorithms
With no overwrite, pages end up with a mix of valid and invalid data
To track which logical blocks are valid, controller maintains flash translation layer (FTL) table (see the sketch below)
Also implements garbage collection to free invalid page space
Allocates overprovisioning to provide working space for GC
Each cell has lifespan, so wear leveling needed to write equally to all cells
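A minimal sketch of the FTL and garbage-collection bookkeeping described above; the array-based map, the sizes, and the function names are illustrative assumptions, not any real controller's firmware:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative geometry only: 64 blocks of 16 pages each. */
#define PAGES_PER_BLOCK 16
#define NUM_BLOCKS      64
#define NUM_PAGES       (PAGES_PER_BLOCK * NUM_BLOCKS)
#define FREE_PAGE       0xFFFF   /* sentinel: logical page not yet mapped */

static uint16_t ftl[NUM_PAGES];   /* logical page -> physical page */
static uint8_t  valid[NUM_PAGES]; /* per physical page: holds live data? */
static uint16_t next_free = 0;    /* naive free-page allocator */

void ftl_init(void) {
    memset(valid, 0, sizeof valid);
    for (int i = 0; i < NUM_PAGES; i++) ftl[i] = FREE_PAGE;
}

/* Out-of-place write: map the logical page to a fresh physical page
 * and mark the previous physical page invalid (GC reclaims it later). */
int ftl_write(uint16_t logical) {
    if (next_free >= NUM_PAGES) return -1;   /* would trigger GC here */
    if (ftl[logical] != FREE_PAGE)
        valid[ftl[logical]] = 0;             /* old copy is now stale */
    ftl[logical] = next_free;
    valid[next_free] = 1;
    /* ...program flash page next_free with the data... */
    next_free++;
    return 0;
}

/* GC candidate test: a block whose pages are all invalid can be erased. */
int block_erasable(uint16_t block) {
    for (int p = 0; p < PAGES_PER_BLOCK; p++)
        if (valid[block * PAGES_PER_BLOCK + p]) return 0;
    return 1;
}
```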
Volatile Memory
DRAM frequently used as mass-storage device
• Not technically secondary storage because volatile, but can have file systems, be used like very fast secondary storage
RAM drives (with many names, including RAM disks) present as raw block devices, commonly file-system formatted
Computers have buffering, caching via RAM, so why RAM drives?
• Caches / buffers allocated / managed by programmer, operating system, hardware
• RAM drives under user control
• Found in all major operating systems
  Linux /dev/ram, macOS diskutil to create them, Linux /tmp of file system type tmpfs
Used as high-speed temporary storage
Magnetic Tape
Disk Attachment
Host-attached storage accessed through I/O ports talking to I/O busses
Several busses available, including advanced technology attachment (ATA), serial ATA (SATA), eSATA, serial attached SCSI (SAS), universal serial bus (USB), and fibre channel (FC)
Most common is SATA
Because NVM much faster than HDD, new fast interface for NVM called NVM express (NVMe), connecting directly to PCI bus
Data transfers on a bus carried out by special electronic processors called controllers (or host-bus adapters, HBAs)
• Host controller on the computer end of the bus, device controller on device end
• Computer places command on host controller, using memory-mapped I/O ports
  Host controller sends messages to device controller
  Data transferred via DMA between device and computer
Address Mapping
Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer
• Low-level formatting creates logical blocks on physical media
The 1-dimensional array of logical blocks is mapped into the sectors of the disk sequentially
• Sector 0 is the first sector of the first track on the outermost cylinder
• Mapping proceeds in order through that track, then the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost
• Logical-to-physical address translation should be easy (see the sketch below)
  Except for bad sectors
  Non-constant # of sectors per track via constant angular velocity
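A minimal sketch of that idealized mapping, assuming (contrary to real drives) a constant number of sectors per track and no bad-sector remapping; the geometry constants are invented:

```c
#include <stdio.h>

/* Hypothetical geometry: real drives hide this behind LBAs. */
#define HEADS             16    /* tracks per cylinder   */
#define SECTORS_PER_TRACK 63    /* assumed constant here */

/* Idealized LBA -> (cylinder, head, sector): fill a track, then the
 * next head in the same cylinder, then move inward one cylinder. */
void lba_to_chs(unsigned lba, unsigned *c, unsigned *h, unsigned *s) {
    *c = lba / (HEADS * SECTORS_PER_TRACK);
    *h = (lba / SECTORS_PER_TRACK) % HEADS;
    *s = lba % SECTORS_PER_TRACK;
}

int main(void) {
    unsigned c, h, s;
    lba_to_chs(123456, &c, &h, &s);
    printf("LBA 123456 -> cylinder %u, head %u, sector %u\n", c, h, s);
    return 0;
}
```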
HDD Scheduling
The operating system is responsible for using hardware efficiently — for the disk drives, this means having a fast access time and disk bandwidth
Minimize seek time
Seek time ≈ seek distance
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer
Disk Scheduling (Cont.)
There are many sources of disk I/O requests
• OS
• System processes
• User processes
I/O request includes input or output mode, disk address, memory address, number of sectors to transfer
OS maintains queue of requests, per disk or device
Idle disk can immediately work on I/O request; busy disk means work must queue
• Optimization algorithms only make sense when a queue exists
In the past, operating system responsible for queue management, disk drive head scheduling
• Now, built into the storage devices and controllers
• OS just provides LBAs; devices handle sorting of requests
Some of the algorithms they use described next
Disk Scheduling (Cont.)
Note that drive controllers have small buffers and can manage a queue of I/O requests (of varying “depth”)
Several algorithms exist to schedule the servicing of disk I/O requests
The analysis is true for one or many platters
We illustrate scheduling algorithms with a request queue (cylinders 0–199)
  98, 183, 37, 122, 14, 124, 65, 67
Head pointer 53
FCFS
Illustration shows total head movement of 640 cylinders
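As a minimal sketch (purely illustrative), the 640-cylinder figure can be reproduced by summing the seek distances in arrival order:

```c
#include <stdio.h>
#include <stdlib.h>

/* Total head movement when requests are serviced strictly in
 * arrival order (FCFS). */
unsigned fcfs_movement(int head, const int *req, int n) {
    unsigned total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    return total;
}

int main(void) {
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
    printf("FCFS: %u cylinders\n", fcfs_movement(53, req, 8)); /* 640 */
    return 0;
}
```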
SCAN
The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues
SCAN algorithm sometimes called the elevator algorithm
Illustration shows total head movement of 208 cylinders
But note that if requests are uniformly dense, the largest density of pending requests is at the other end of the disk, and those requests wait the longest
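A minimal sketch of the same computation for SCAN, assuming the head first moves toward lower cylinders and, as in the illustration, reverses after the last pending request in that direction rather than traveling all the way to cylinder 0:

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* SCAN toward lower cylinders first, reversing after the last
 * pending request in that direction (matching the illustration);
 * assumes the head starts between the lowest and highest request. */
unsigned scan_movement(int head, int *req, int n) {
    qsort(req, n, sizeof *req, cmp);
    int lo = req[0], hi = req[n - 1];
    return (unsigned)((head - lo) + (hi - lo));  /* sweep down, then up */
}

int main(void) {
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
    printf("SCAN: %u cylinders\n", scan_movement(53, req, 8)); /* 208 */
    return 0;
}
```

Sweeping all the way to cylinder 0 before reversing would instead give 53 + 183 = 236 cylinders.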
SCAN (Cont.)
C-SCAN
Provides a more uniform wait time than SCAN
The head moves from one end of the disk to the other, servicing requests as it goes
• When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip
Treats the cylinders as a circular list that wraps around from the last cylinder to the first one
Total number of cylinders? (see the sketch below)
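One answer, as a hedged sketch: conventions differ on whether the return seek counts as head movement, so both totals are computed below (the request data and the 0–199 cylinder range are from the slides; the convention choice is an assumption):

```c
#include <stdio.h>

/* C-SCAN over cylinders 0..MAXCYL, head at 53 moving upward. */
#define MAXCYL 199

int main(void) {
    int head = 53;
    int last_low = 37;   /* highest request below the head, served after the wrap */

    unsigned up   = MAXCYL - head;   /* 53 -> 199        */
    unsigned wrap = MAXCYL;          /* 199 -> 0 return  */
    unsigned tail = last_low;        /* 0 -> 37          */

    printf("C-SCAN counting the return seek: %u\n", up + wrap + tail); /* 382 */
    printf("C-SCAN ignoring the return seek: %u\n", up + tail);        /* 183 */
    return 0;
}
```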
C-SCAN (Cont.)
Selecting a Disk-Scheduling Algorithm
SSTF is common and has a natural appeal
SCAN and C-SCAN perform better for systems that place a heavy load on the disk
• Less starvation, but still possible
To avoid starvation, Linux implements the deadline scheduler (sketched below)
• Maintains separate read and write queues, gives reads priority
  Because processes more likely to block on read than write
• Implements four queues: 2 x read and 2 x write
  1 read and 1 write queue sorted in LBA order, essentially implementing C-SCAN
  1 read and 1 write queue sorted in FCFS order
  All I/O requests sent in batches sorted in that queue’s order
  After each batch, checks if any requests in the FCFS queues are older than configured age (default 500 ms)
  – If so, LBA queue containing that request is selected for next batch of I/O
In RHEL 7, NOOP and completely fair queueing (CFQ) schedulers also available; defaults vary by storage device
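A highly simplified sketch of the batch-selection rule described above, for one direction (reads) only; the single array, fixed sizes, and helper names are illustrative assumptions, not Linux's actual data structures:

```c
/* Toy model of the deadline scheduler's selection rule: dispatch in
 * LBA order unless the oldest request has exceeded its deadline. */
#define MAX_REQ     64
#define DEADLINE_MS 500

struct request { unsigned lba; unsigned long submit_ms; };

/* One array kept in arrival order stands in for both queues: index 0
 * plays the FCFS head, and a scan over it finds the LBA-order choice. */
struct request q[MAX_REQ];
int nreq = 0;

int pick_next(unsigned long now_ms, unsigned head_lba) {
    if (nreq == 0) return -1;
    /* Expiry check: if the oldest request is starving, serve it. */
    if (now_ms - q[0].submit_ms > DEADLINE_MS)
        return 0;
    /* Otherwise pick the nearest LBA at or above the head (C-SCAN-like),
     * wrapping to the lowest LBA if none lie above. */
    int best = -1, wrap = -1;
    for (int i = 0; i < nreq; i++) {
        if (q[i].lba >= head_lba &&
            (best < 0 || q[i].lba < q[best].lba)) best = i;
        if (wrap < 0 || q[i].lba < q[wrap].lba) wrap = i;
    }
    return best >= 0 ? best : wrap;
}
```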
NVM Scheduling
No disk heads or rotational latency, but still room for optimization
In RHEL 7, NOOP (no scheduling) is used, but adjacent LBA requests are combined
• NVM best at random I/O, HDD at sequential
• Throughput can be similar
• Input/output operations per second (IOPS) much higher with NVM (hundreds of thousands vs hundreds)
• But write amplification (one write causing garbage collection and many reads/writes) can decrease the performance advantage (worked example below)
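As an illustrative (made-up) calculation: if a 4 KB host write forces garbage collection to relocate 12 KB of still-valid pages before a block can be erased, the device writes 16 KB of flash for 4 KB of user data, a write amplification factor of 4.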
Error Detection and Correction
Fundamental aspect of many parts of computing (memory, networking, storage)
Error detection determines if a problem has occurred (for example, a bit flipping)
• If detected, can halt the operation
• Detection frequently done via parity bit (see the sketch below)
Parity is one form of checksum – uses modular arithmetic to compute, store, and compare values of fixed-length words
• Another error-detection method common in networking is cyclic redundancy check (CRC), which uses a hash function to detect multiple-bit errors
Error-correction code (ECC) not only detects, but can correct some errors
• Soft errors correctable, hard errors detected but not corrected
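A minimal sketch of single-bit error detection with an even-parity bit (illustrative only):

```c
#include <stdint.h>
#include <stdio.h>

/* Even parity: the parity bit makes the total number of 1 bits even,
 * so any single-bit flip is detectable (but not locatable). */
uint8_t parity_bit(uint8_t byte) {
    uint8_t p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (byte >> i) & 1;    /* XOR of all bits = parity */
    return p;
}

int main(void) {
    uint8_t data = 0x5A;                       /* 0101 1010: four 1 bits */
    uint8_t stored_parity = parity_bit(data);  /* 0 */

    data ^= 0x08;                              /* simulate one bit flip */
    if (parity_bit(data) != stored_parity)
        puts("parity mismatch: error detected");
    return 0;
}
```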
Storage Device Management
Low-level formatting, or physical formatting — dividing a disk into sectors that the disk controller can read and write
• Each sector can hold header information, plus data, plus error correction code (ECC)
• Usually 512 bytes of data, but can be selectable
To use a disk to hold files, the operating system still needs to record its own data structures on the disk
• Partition the disk into one or more groups of cylinders, each treated as a logical disk
• Logical formatting or “making a file system”
• To increase efficiency, most file systems group blocks into clusters
  Disk I/O done in blocks
  File I/O done in clusters
Storage Device Management (cont.)
Root partition contains the OS; other partitions can hold other OSes, other file systems, or be raw
• Mounted at boot time
• Other partitions can mount automatically or manually
At mount time, file system consistency checked
• Is all metadata correct?
  If not, fix it, try again
  If yes, add to mount table, allow access
Boot block can point to boot volume or boot loader set of blocks that contain enough code to know how to load the kernel from the file system
• Or a boot management program for multi-OS booting
Device Storage Management (Cont.)
Swap-Space Management
Used for moving entire processes (swapping), or pages (paging), from DRAM to secondary storage when DRAM not large enough for all processes
Operating system provides swap-space management
• Secondary storage slower than DRAM, so important to optimize performance
• Usually multiple swap spaces possible – decreasing I/O load on any given device
• Best to have dedicated devices
• Can be in raw partition or a file within a file system (for convenience of adding)
• Data structures for swapping on Linux systems:
Storage Attachment
Computers access storage in three ways
• host-attached
• network-attached
• cloud
Host-attached access through local I/O ports, using one of several technologies
• To attach many devices, use storage busses such as USB, FireWire, Thunderbolt
• High-end systems use fibre channel (FC)
  High-speed serial architecture using fibre or copper cables
  Multiple hosts and storage devices can connect to the FC fabric
Network-Attached Storage
Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus)
• Remotely attaching to file systems
  NFS and CIFS are common protocols
Implemented via remote procedure calls (RPCs) between host and storage, typically over TCP or UDP on an IP network
iSCSI protocol uses IP network to carry the SCSI protocol
• Remotely attaching to devices (blocks)
Cloud Storage
Similar to NAS, provides access to storage across a network
• Unlike NAS, accessed over the Internet or a WAN to remote data center
NAS presented as just another file system, while cloud storage is API based, with programs using the APIs to provide access
• Examples include Dropbox, Amazon S3, Microsoft OneDrive, Apple iCloud
• Use APIs because of latency and failure scenarios (NAS protocols wouldn’t work well)
Storage Array
Can just attach disks, or arrays of disks
Avoids the NAS drawback of using network bandwidth
Storage array has controller(s), provides features to attached host(s)
• Ports to connect hosts to array
• Memory, controlling software (sometimes NVRAM, etc.)
• A few to thousands of disks
• RAID, hot spares, hot swap (discussed later)
• Shared storage -> more efficiency
• Features found in some file systems
  Snapshots, clones, thin provisioning, replication, deduplication, etc.
Storage Area Network
Common in large storage environments
Multiple hosts attached to multiple storage arrays – flexible
Storage Area Network (Cont.)
SAN is one or more storage arrays
• Connected to one or more Fibre Channel switches or an InfiniBand (IB) network
Hosts also attach to the switches
Storage made available via LUN masking from specific arrays to specific servers
Easy to add or remove storage, add new host and allocate it storage
Why have separate storage networks and communications networks?
• Consider iSCSI, FCOE
RAID Structure
RAID – redundant array of inexpensive disks
• multiple disk drives provide reliability via redundancy
Increases the mean time to failure
Mean time to repair – exposure time when another failure could cause data loss
Mean time to data loss based on above factors
If mirrored disks fail independently, consider disks with a 100,000-hour mean time to failure and a 10-hour mean time to repair
• Mean time to data loss is 100,000² / (2 ∗ 10) = 500 ∗ 10⁶ hours, or 57,000 years! (for data loss, the second disk must fail during the first disk’s 10-hour repair window)
Frequently combined with NVRAM to improve write performance
Several improvements in disk-use techniques involve the use of multiple disks working cooperatively
RAID (Cont.)
Disk striping uses a group of disks as one storage unit
RAID is arranged into six different levels
RAID schemes improve performance and improve the reliability of the storage system by storing redundant data
• Mirroring or shadowing (RAID 1) keeps duplicate of each disk
• Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provides high performance and high reliability
• Block interleaved parity (RAID 4, 5, 6) uses much less redundancy (see the parity sketch below)
RAID within a storage array can still fail if the array fails, so automatic replication of the data between arrays is common
Frequently, a small number of hot-spare disks are left unallocated, automatically replacing a failed disk and having data rebuilt onto them
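A minimal sketch of the XOR arithmetic behind block-interleaved parity (RAID 4/5); the stripe width and block size are made-up constants:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NDATA 3    /* data disks in the stripe (illustrative) */
#define BLOCK 8    /* tiny block size for the demo */

/* Parity block = XOR of all data blocks in the stripe. */
void compute_parity(uint8_t data[NDATA][BLOCK], uint8_t parity[BLOCK]) {
    memset(parity, 0, BLOCK);
    for (int d = 0; d < NDATA; d++)
        for (int i = 0; i < BLOCK; i++)
            parity[i] ^= data[d][i];
}

/* Rebuild one lost data block: XOR the surviving blocks with parity. */
void rebuild(uint8_t data[NDATA][BLOCK], const uint8_t parity[BLOCK],
             int lost) {
    memcpy(data[lost], parity, BLOCK);
    for (int d = 0; d < NDATA; d++)
        if (d != lost)
            for (int i = 0; i < BLOCK; i++)
                data[lost][i] ^= data[d][i];
}

int main(void) {
    uint8_t data[NDATA][BLOCK] = {"disk-0", "disk-1", "disk-2"};
    uint8_t parity[BLOCK];
    compute_parity(data, parity);

    memset(data[1], 0, BLOCK);        /* simulate losing disk 1 */
    rebuild(data, parity, 1);
    printf("recovered: %s\n", (char *)data[1]);   /* prints disk-1 */
    return 0;
}
```

Because XOR is its own inverse, the same operation that computes parity also rebuilds any single lost block.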
RAID Levels
RAID (0 + 1) and (1 + 0)
Other Features
Regardless of where RAID implemented, other useful features can be added
Snapshot is a view of file system before a set of changes take place (i.e., at a point in time)
• More in Ch 12
Replication is automatic duplication of writes between separate sites
• For redundancy and disaster recovery
• Can be synchronous or asynchronous
Hot spare disk is unused; if a disk fails, RAID automatically uses it to replace the failed disk and rebuild the RAID set if possible
• Decreases mean time to repair
Extensions
RAID alone does not prevent or detect data corruption or other errors, just disk failures
Solaris ZFS adds checksums of all data and metadata
Checksums kept with pointer to object, to detect if object is the right one and whether it changed
Can detect and correct data and metadata corruption
ZFS also removes volumes, partitions
• Disks allocated in pools
• Filesystems within a pool share that pool, use and release space like malloc() and free() memory allocate / release calls
Traditional and Pooled Storage
Object Storage
General-purpose computing, file systems not sufficient for very large scale
Another approach – start with a storage pool and place objects in it
• Object just a container of data
• No way to navigate the pool to find objects (no directory structures, few services)
• Computer-oriented, not user-oriented
Typical sequence (sketched below)
• Create an object within the pool, receive an object ID
• Access object via that ID
• Delete object via that ID
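A sketch of that typical sequence against a hypothetical client API; obj_create, obj_get, obj_delete, and the toy in-memory backend are invented for illustration, not the interfaces of HDFS or Ceph:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical object-store client API with a toy in-memory backend
 * so the sketch is self-contained. */
typedef unsigned long object_id;
enum { MAX_OBJS = 16, OBJ_SIZE = 64 };

static char store[MAX_OBJS][OBJ_SIZE];
static int  used[MAX_OBJS];

/* Create an object in the pool; the store (not the caller) picks the ID. */
object_id obj_create(const void *data, size_t len) {
    for (object_id id = 0; id < MAX_OBJS; id++)
        if (!used[id]) {
            used[id] = 1;
            memcpy(store[id], data, len < OBJ_SIZE ? len : OBJ_SIZE);
            return id;
        }
    return (object_id)-1;   /* pool full */
}

int obj_get(object_id id, void *buf, size_t len) {
    if (id >= MAX_OBJS || !used[id]) return -1;
    memcpy(buf, store[id], len < OBJ_SIZE ? len : OBJ_SIZE);
    return 0;
}

int obj_delete(object_id id) {
    if (id >= MAX_OBJS || !used[id]) return -1;
    used[id] = 0;
    return 0;
}

int main(void) {
    char buf[OBJ_SIZE];
    /* 1. create, receive an ID; 2. access via ID; 3. delete via ID.
     * Note there is no path or directory to walk, only the ID. */
    object_id id = obj_create("hello, object storage", 22);
    obj_get(id, buf, sizeof buf);
    printf("object %lu: %s\n", id, buf);
    obj_delete(id);
    return 0;
}
```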
Object Storage (Cont.)
Object storage management software like the Hadoop Distributed File System (HDFS) and Ceph determines where to store objects, manages protection
• Typically by storing N copies, across N systems, in the object storage cluster
• Horizontally scalable
• Content addressable, unstructured
End of Chapter 11