Chapter 10: Mass-Storage Systems
1956
IBM RAMAC computer
included the IBM Model 350
disk storage system
5M (7-bit) characters
50 x 24” platters
Access time < 1 second
Solid-State Disks
Nonvolatile memory used like a hard drive
Many technology variations
Can be more reliable than HDDs
More expensive per MB
May have a shorter life span
Less capacity
But much faster
Busses can be too slow -> connect directly to PCI for example
No moving parts, so no seek time or rotational latency
Magnetic Tape
Was early secondary-storage medium
Evolved from open spools to cartridges
Relatively permanent and holds large quantities of data
Access time slow
Random access ~1000 times slower than disk
Mainly used for backup, storage of infrequently-used data, transfer medium
between systems
Kept in spool and wound or rewound past read-write head
Once data under head, transfer rates comparable to disk
140MB/sec and greater
200GB to 1.5TB typical storage
Common technologies are LTO-{3,4,5} and T10000
Disk Structure
Disk drives are addressed as large 1-dimensional arrays of logical blocks,
where the logical block is the smallest unit of transfer
Low-level formatting creates logical blocks on physical media
The 1-dimensional array of logical blocks is mapped into the sectors of the disk
sequentially
Sector 0 is the first sector of the first track on the outermost cylinder
Mapping proceeds in order through that track, then the rest of the tracks in
that cylinder, and then through the rest of the cylinders from outermost to
innermost
Logical-to-physical address translation should be easy
Except for bad sectors
Non-constant # of sectors per track via constant angular velocity
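A minimal C sketch of the sequential mapping described above, assuming hypothetical geometry constants and a constant number of sectors per track (real drives with zoned recording and remapped bad sectors are messier):

#include <stdio.h>

/* Hypothetical geometry; real values vary by drive. */
#define SECTORS_PER_TRACK   63
#define TRACKS_PER_CYLINDER 16    /* one track per head */

struct chs { long cylinder, head, sector; };

/* Map a logical block number to cylinder/head/sector, filling each
 * track, then each remaining track of the cylinder, then the next
 * cylinder, outermost to innermost. */
struct chs block_to_chs(long block)
{
    struct chs a;
    long per_cyl = (long)SECTORS_PER_TRACK * TRACKS_PER_CYLINDER;
    a.cylinder = block / per_cyl;
    a.head     = (block % per_cyl) / SECTORS_PER_TRACK;
    a.sector   = block % SECTORS_PER_TRACK;
    return a;
}

int main(void)
{
    struct chs a = block_to_chs(5000);
    printf("cyl=%ld head=%ld sector=%ld\n", a.cylinder, a.head, a.sector);
    return 0;
}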
Disk Attachment
Host-attached storage accessed through I/O ports talking to I/O busses
SCSI itself is a bus, up to 16 devices on one cable, SCSI initiator requests
operation and SCSI targets perform tasks
Each target can have up to 8 logical units (disks attached to device controller)
FC is high-speed serial architecture
Can be switched fabric with 24-bit address space – the basis of storage area
networks (SANs) in which many hosts attach to many storage units
I/O directed to bus ID, device ID, logical unit (LUN)
Storage Array
Can just attach disks, or arrays of disks
Storage Array has controller(s), provides features to attached
host(s)
Ports to connect hosts to array
Memory, controlling software (sometimes NVRAM, etc)
A few to thousands of disks
RAID, hot spares, hot swap (discussed later)
Shared storage -> more efficiency
Features found in some file systems
Snapshots, clones, thin provisioning, replication,
deduplication, etc
Storage Area Network
Common in large storage environments
Multiple hosts attached to multiple storage arrays - flexible
Storage Area Network (Cont.)
SAN is one or more storage arrays
Connected to one or more Fibre Channel switches
Hosts also attach to the switches
Storage made available via LUN Masking from specific arrays to specific
servers
Easy to add or remove storage, add new host and allocate it storage
Over low-latency Fibre Channel fabric
Why have separate storage networks and communications networks?
Consider iSCSI, FCOE
Network-Attached Storage
Network-attached storage (NAS) is storage made available over a network rather
than over a local connection (such as a bus)
Remotely attaching to file systems
NFS and CIFS are common protocols
Implemented via remote procedure calls (RPCs) between host and storage over
typically TCP or UDP on IP network
iSCSI protocol uses IP network to carry the SCSI protocol
Remotely attaching to devices (blocks)
Disk Scheduling
The operating system is responsible for using hardware efficiently — for the disk
drives, this means fast access times and high disk bandwidth
Minimize seek time
Seek time ∝ seek distance
Disk bandwidth is the total number of bytes transferred, divided by the total time
between the first request for service and the completion of the last transfer
Disk Scheduling (Cont.)
There are many sources of disk I/O request
OS
System processes
User processes
I/O request includes input or output mode, disk address, memory address,
number of sectors to transfer
OS maintains queue of requests, per disk or device
Idle disk can immediately work on I/O request, busy disk means work must
queue
Optimization algorithms only make sense when a queue exists
Disk Scheduling (Cont.)
Note that drive controllers have small buffers and can manage a queue of I/O
requests (of varying “depth”)
Several algorithms exist to schedule the servicing of disk I/O requests
The analysis is true for one or many platters
We illustrate scheduling algorithms with a request queue (0-199)
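A small C sketch comparing two of the algorithms that follow, FCFS and SSTF; the request queue and the starting head position (cylinder 53) are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* Sample request queue on a 0-199 cylinder disk. */
static int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
#define NREQ (sizeof(queue) / sizeof(queue[0]))

/* FCFS: service requests strictly in arrival order. */
static int fcfs(int head)
{
    int moved = 0;
    for (size_t i = 0; i < NREQ; i++) {
        moved += abs(queue[i] - head);
        head = queue[i];
    }
    return moved;
}

/* SSTF: always service the pending request closest to the head. */
static int sstf(int head)
{
    int done[NREQ] = {0}, moved = 0;
    for (size_t n = 0; n < NREQ; n++) {
        int best = 0, bestdist = INT_MAX;
        for (size_t i = 0; i < NREQ; i++)
            if (!done[i] && abs(queue[i] - head) < bestdist) {
                bestdist = abs(queue[i] - head);
                best = (int)i;
            }
        done[best] = 1;
        moved += bestdist;
        head = queue[best];
    }
    return moved;
}

int main(void)
{
    printf("FCFS: %d cylinders of head movement\n", fcfs(53));  /* 640 */
    printf("SSTF: %d cylinders of head movement\n", sstf(53));  /* 236 */
    return 0;
}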
Disk Management
Raw disk access for apps that want to do their own block management, keep OS
out of the way (databases for example)
Boot block initializes system
The bootstrap is stored in ROM
Bootstrap loader program stored in boot blocks of boot partition
Methods such as sector sparing used to handle bad blocks
Booting from a Disk in Windows
Swap-Space Management
Swap-space — Virtual memory uses disk space as an extension of main memory
Less common now due to memory capacity increases
Swap-space can be carved out of the normal file system, or, more commonly, it
can be in a separate disk partition (raw)
Swap-space management
4.3BSD allocates swap space when process starts; holds text segment (the
program) and data segment
Kernel uses swap maps to track swap-space use
Solaris 2 allocates swap space only when a dirty page is forced out of physical
memory, not when the virtual memory page is first created
File data written to swap space until write to file system requested
Other dirty pages go to swap space due to no other home
Text segment pages thrown out and reread from the file system as needed
What if a system runs out of swap space?
Some systems allow multiple swap spaces
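A loose C sketch of the swap-map idea (a hypothetical one-bit-per-slot bitmap, not the actual 4.3BSD or Linux data structures):

#include <stdint.h>

/* Hypothetical swap map: one bit per page-sized swap slot. */
#define SWAP_SLOTS 4096
static uint8_t swap_map[SWAP_SLOTS / 8];

/* Allocate a free slot; returns its index, or -1 if out of swap space. */
int swap_alloc(void)
{
    for (int i = 0; i < SWAP_SLOTS; i++)
        if (!(swap_map[i / 8] & (1u << (i % 8)))) {
            swap_map[i / 8] |= 1u << (i % 8);
            return i;
        }
    return -1;
}

/* Release a slot when its page is freed or brought back into memory. */
void swap_free(int slot)
{
    swap_map[slot / 8] &= ~(1u << (slot % 8));
}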
Data Structures for Swapping on Linux Systems
RAID Structure
RAID – redundant array of inexpensive disks
multiple disk drives provide reliability via redundancy
Increases the mean time to failure
Mean time to repair – exposure time when another failure could cause data loss
Mean time to data loss based on above factors
If mirrored disks fail independently, consider disk with 100,000 hour mean time
to failure and 10 hour mean time to repair
Mean time to data loss is 100,000² / (2 ∗ 10) = 500 ∗ 10⁶ hours, or 57,000
years! (worked through in the sketch below)
Frequently combined with NVRAM to improve write performance
Several improvements in disk-use techniques involve the use of multiple disks
working cooperatively
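The arithmetic behind the mirrored-pair figure above, as a small C sketch:

#include <stdio.h>

int main(void)
{
    double mttf = 100000.0;  /* mean time to failure, hours */
    double mttr = 10.0;      /* mean time to repair, hours */

    /* Mean time to data loss for an independently failing mirrored
     * pair: both copies must fail within one repair window. */
    double mtdl = (mttf * mttf) / (2.0 * mttr);

    printf("MTDL = %.0f hours (~%.0f years)\n", mtdl, mtdl / (24 * 365));
    return 0;
}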
RAID (Cont.)
Disk striping uses a group of disks as one storage unit
RAID is arranged into six different levels
RAID schemes improve performance and improve the reliability of the storage
system by storing redundant data
Mirroring or shadowing (RAID 1) keeps duplicate of each disk
Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provide high
performance and high reliability
Block interleaved parity (RAID 4, 5, 6) uses much less redundancy (see the
XOR sketch below)
RAID within a storage array can still fail if the array fails, so automatic
replication of the data between arrays is common
Frequently, a small number of hot-spare disks are left unallocated, automatically
replacing a failed disk and having data rebuilt onto them
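A minimal C sketch of the XOR idea behind block-interleaved parity; the block size and number of data disks are illustrative:

#include <string.h>

#define BLOCK_SIZE 512
#define NDATA 4   /* data disks; one additional disk holds parity */

/* Parity block = XOR of the corresponding block on every data disk. */
void compute_parity(unsigned char data[NDATA][BLOCK_SIZE],
                    unsigned char parity[BLOCK_SIZE])
{
    memset(parity, 0, BLOCK_SIZE);
    for (int d = 0; d < NDATA; d++)
        for (int i = 0; i < BLOCK_SIZE; i++)
            parity[i] ^= data[d][i];
}

/* A failed disk's block is recovered by XORing the parity block
 * with the blocks of all surviving data disks. */
void reconstruct(unsigned char data[NDATA][BLOCK_SIZE],
                 unsigned char parity[BLOCK_SIZE], int failed)
{
    memcpy(data[failed], parity, BLOCK_SIZE);
    for (int d = 0; d < NDATA; d++)
        if (d != failed)
            for (int i = 0; i < BLOCK_SIZE; i++)
                data[failed][i] ^= data[d][i];
}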
RAID Levels
RAID (0 + 1) and (1 + 0)
Other Features
Regardless of where RAID implemented, other useful features can be added
Snapshot is a view of file system before a set of changes take place (i.e. at a
point in time)
More in Ch 12
Replication is automatic duplication of writes between separate sites
For redundancy and disaster recovery
Can be synchronous or asynchronous
Hot spare disk is unused; if a disk fails, the array automatically uses the spare
to replace the failed disk and rebuild the RAID set if possible
Decreases mean time to repair
Extensions
RAID alone does not prevent or detect data corruption or other errors, just disk
failures
Solaris ZFS adds checksums of all data and metadata
Checksums kept with pointer to object, to detect if object is the right one and
whether it changed
Can detect and correct data and metadata corruption
ZFS also removes volumes, partitions
Disks allocated in pools
Filesystems within a pool share that pool, using and releasing space like
malloc() and free() memory allocation / release calls
ZFS Checksums All Metadata and Data
Traditional and Pooled Storage
Stable-Storage Implementation
Write-ahead log scheme requires stable storage
Stable storage means data is never lost (due to failure, etc.)
To implement stable storage:
Replicate information on more than one nonvolatile storage media with
independent failure modes
Update information in a controlled manner to ensure that we can recover the
stable data after any failure during data transfer or recovery
Disk write has 1 of 3 outcomes
1. Successful completion - The data were written correctly on disk
2. Partial failure - A failure occurred in the midst of transfer, so only some of the
sectors were written with the new data, and the sector being written during the
failure may have been corrupted
3. Total failure - The failure occurred before the disk write started, so the
previous data values on the disk remain intact
Stable-Storage Implementation (Cont.)
If failure occurs during block write, recovery procedure restores block to consistent state
System maintains 2 physical blocks per logical block and does the following:
1. Write to 1st physical
2. When successful, write to 2nd physical
3. Declare complete only after second write completes successfully
Systems frequently use NVRAM as one of the physical blocks to accelerate writes
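A C sketch of that protocol; write_block() and verify_block() are hypothetical driver hooks standing in for the real device interface:

#include <stdbool.h>

/* Hypothetical driver hooks: write a block, then confirm it was
 * written correctly (e.g., by read-back comparison or checksum). */
bool write_block(int copy, long block, const void *data);
bool verify_block(int copy, long block, const void *data);

/* Stable write over two physical copies: a crash at any point
 * leaves at least one consistent copy for recovery to use. */
bool stable_write(long block, const void *data)
{
    /* 1. Write to the first physical block (often NVRAM). */
    if (!write_block(0, block, data) || !verify_block(0, block, data))
        return false;

    /* 2. Only when that succeeds, write to the second physical block. */
    if (!write_block(1, block, data) || !verify_block(1, block, data))
        return false;

    /* 3. Declare the write complete only after both copies succeed. */
    return true;
}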
End of Chapter 10