Lect Part 2

The document discusses memory management techniques including logical vs physical address spaces, swapping, and contiguous allocation. It describes how memory management units map virtual to physical addresses. Contiguous allocation allocates each process to a single contiguous block of memory using algorithms like first fit, best fit, and worst fit.


Computer Science Department

Faculty of Computers and Information Sciences


Mansoura University

MEMORY MANAGEMENT
(Part 1)
Presented By:
Dr. Abdelaziz Said
1
Agenda
Introduction.
Logical vs. Physical Address Space.
Swapping.
Contiguous Allocation.

2
Introduction
 Storage Hierarchy (Cost – Speed - Capacity).

3
Introduction
 Program must be brought into memory & placed within a process for it
to be run.

 Input queue: collection of processes on the disk that are waiting to be
brought into memory to run the program.

 The process may be moved between disk and memory during its
execution.

 Instruction execution cycle:


1. Fetch: CPU fetches instruction from memory according to the
value of PC.
2. Decode: instruction is decoded and operands may be fetched
from memory.
3. Execute: instruction is executed on operands, and result is put
back in memory.
4
Introduction
Uniprogramming system vs. Multi-programming system.

5
Memory Management
The task of Memory subdivision is carried out dynamically
by the operating system and is known as memory
management.
The part of the operating system that manages (part of) the
memory hierarchy is called the memory manager.
 keep track of which parts of memory are in use.

 allocate memory to processes when they need it, and deallocate it
when they are done.

6
Memory Management
 A simple solution to both the relocation and protection
problems: two special hardware registers, called the base
and limit registers.

 The base register holds the smallest legal physical memory
address and the limit register specifies the size of the range.
7
Memory Management
 Memory protection using base and limit registers.

8
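The base/limit check above is easy to sketch. A minimal Python sketch; the register values below are illustrative, not taken from the slide's figure:

```python
def legal_access(address, base, limit):
    """Hardware-style check: an access is legal iff
    base <= address < base + limit; anything else traps to the OS."""
    return base <= address < base + limit

# Illustrative register values: a process loaded at 300040 with limit
# 120900 may touch addresses 300040 .. 420939 only.
assert legal_access(300040, 300040, 120900)
assert legal_access(420939, 300040, 120900)
assert not legal_access(420940, 300040, 120900)
```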
Memory Management
 Address binding is the mapping from one address space to
another.

1. Compile time: If you know at compile time where the process will
reside in memory, then absolute code can be generated.

2. Load time: If it is not known at compile time where the process will
reside in memory, then the compiler must generate relocatable code.
In this case, final binding is delayed until load time.

3. Execution time: Binding delayed until run time if the process can be
moved during its execution from one memory segment to another.
Hardware support is needed for address mapping (e.g., base and
limit registers).

9
Logical Vs. Physical Address Space

 Logical (or virtual) address: an address generated by the CPU.

 Physical address: the address seen by the memory unit.

 Logical address space: the set of all logical addresses generated
by a program.
 Physical address space: the set of all physical addresses
corresponding to the logical addresses generated by a program.
 The run-time mapping from virtual to physical addresses is done
by a hardware device called the memory-management unit
(MMU).
10
Memory- Management Unit (MMU)

11
Memory- Management Unit (MMU)

Advantages:
 OS can easily move a process during execution.

 OS can allow a process to grow over time.

 Simple, fast hardware: two special registers, an add, and a compare.

Disadvantages:
 Slows down hardware due to the add on every memory reference.

 Can't share memory (such as program text) between processes.

 Process is still limited to physical memory size.

 Degree of multiprogramming is very limited since all memory of all
active processes must fit in memory.
12
Swapping
Swapping is a mechanism in which a process can be
swapped temporarily out of memory to a backing store, and
then brought back into memory for continued execution.

13
Swapping

 Backing store is a fast disk large enough to accommodate copies of
all memory images for all users; it must provide direct access to
these memory images.

 Major part of swap time is transfer time, where total transfer time
is directly proportional to the amount of memory swapped.

14
Swapping on Mobile Systems

 mobile operating-system designers avoid swapping. Why?


 Limited memory (Flash memory).
 limited number of writes that flash memory can tolerate before it
becomes unreliable.

 when free memory falls below a certain threshold:


 Apple’s iOS asks applications to voluntarily relinquish allocated
memory.
 Read-only data (such as code) are removed from the system and later
reloaded from flash memory if necessary.
 Data that have been modified (such as the stack) are never removed.
However, any applications that fail to free up sufficient memory may be
terminated by the operating system.

15
Swapping on Mobile Systems

 mobile operating-system designers avoid swapping. Why?


 Limited memory (Flash memory).
 limited number of writes that flash memory can tolerate before it
becomes unreliable.

 when free memory falls below a certain threshold:


 Android does not support swapping and adopts a strategy similar to that
used by iOS.
 It may terminate a process if insufficient free memory is available.
 However, before terminating a process, Android writes its application
state to flash memory so that it can be quickly restarted.

16
Contiguous Allocation
 In contiguous allocation, each process is contained in a single
contiguous section of memory.
 fixed-sized partitions vs. variable partitions.

17
Contiguous Allocation
 Memory Allocation Algorithms:-
1. First Fit: Allocate the first hole that is big enough.
2. Next Fit: Allocate the first hole that is big enough (like first fit);
however, the searching process starts at the location where the
previous searching ended.
3. Best-fit: Allocate the smallest hole that is big enough; must search
entire list, unless ordered by size. Produces the smallest leftover
hole.
4. Worst-fit: Allocate the largest hole; must also search entire list.
Produces the largest leftover hole.

18
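The four placement strategies above can be sketched with one search loop. A Python sketch; following the hint used in the examples on the next slides, an allocated hole is consumed whole:

```python
def run(holes, requests, strategy):
    """Serve requests in order; an allocated hole is consumed whole."""
    holes = list(holes)                  # sizes; None marks a consumed hole
    start = 0                            # where the last next-fit search ended
    chosen = []
    for req in requests:
        order = list(range(len(holes)))
        if strategy == "next":           # resume scanning after the last hit
            order = [(start + k) % len(holes) for k in range(len(holes))]
        cands = [j for j in order if holes[j] is not None and holes[j] >= req]
        if not cands:
            chosen.append(None)          # request cannot be satisfied
            continue
        if strategy == "best":           # smallest hole that fits
            pick = min(cands, key=lambda j: holes[j])
        elif strategy == "worst":        # largest hole
            pick = max(cands, key=lambda j: holes[j])
        else:                            # first fit / next fit: first candidate
            pick = cands[0]
        chosen.append(holes[pick])
        holes[pick] = None
        start = (pick + 1) % len(holes)
    return chosen

# The first example's data: holes 50,12,20,40,32,9,22,45; requests 20,25,7.
holes = [50, 12, 20, 40, 32, 9, 22, 45]
for strategy, expected in [("first", [50, 40, 12]), ("best", [20, 32, 9]),
                           ("worst", [50, 45, 40]), ("next", [50, 40, 32])]:
    assert run(holes, [20, 25, 7], strategy) == expected
```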
Contiguous Allocation
• Example:- Consider a swapping system in which the memory
contains the holes of the following sizes in order: 50KB, 12KB,
20KB, 40KB, 32 KB, 9KB, 22KB, and 45KB. Which holes are
allocated for successive segment requests of:
(a) 20KB (b) 25KB (c) 7KB
using first fit, best fit, worst fit, and next fit.
(Hint: if the segment request is smaller than the allocated hole,
ignore the remaining hole)

Request      First Fit   Best Fit   Worst Fit   Next Fit
(a) 20 KB       50          20         50          50
(b) 25 KB       40          32         45          40
(c) 7 KB        12           9         40          32
19
Contiguous Allocation
• Example:- Consider a swapping system in which the memory
contains the holes of the following sizes in order: 10KB, 4KB,
20KB, 18KB, 7KB, 9KB, 12KB, and 15KB. Which holes are allocated
for successive segment requests of:
(a) 12KB (b) 10KB (c) 9KB
using first fit, best fit, worst fit, and next fit.

Request      First Fit   Best Fit   Worst Fit   Next Fit
(a) 12 KB       20          12         20          20
(b) 10 KB       10          10         18          18
(c) 9 KB        18           9         15           9

20
Contiguous Allocation
• Example:- Assume that we have the following jobs to be scheduled
using FCFS algorithm with variable partition memory scheme:
Job Size CPU
J1 50 KB 13
J2 120 KB 3
J3 70 KB 9
J4 20 KB 3
J5 105 KB 8
J6 55 KB 13

Memory size is 256KB, with partitions of 50KB, 120KB, 70 KB, and 16KB.


Show how the memory will look after deallocating job 6 (assume no
memory compaction is allowed) using FCFS scheduling with the First-fit
allocation technique.
21
Contiguous Allocation
Memory snapshots (partition: job, remaining CPU time; First-fit):

t = 0    50 KB: J1 (13)   120 KB: J2 (3)   70 KB: J3 (9)    16 KB: free
t = 3    50 KB: J1 (10)   120 KB: J4 (3)   70 KB: J3 (6)    16 KB: free
t = 6    50 KB: J1 (7)    120 KB: J5 (8)   70 KB: J3 (3)    16 KB: free
t = 9    50 KB: J1 (4)    120 KB: J5 (5)   70 KB: J6 (13)   16 KB: free
t = 13   50 KB: free      120 KB: J5 (1)   70 KB: J6 (9)    16 KB: free
t = 14   50 KB: free      120 KB: free     70 KB: J6 (8)    16 KB: free
t = 22   all partitions free (J6 deallocated)
22
Contiguous Allocation
• Example:- Assume that we have the following jobs to be scheduled
using FCFS algorithm with variable partition memory scheme with
memory size = 256KB:

Job Arrival Time Size CPU


J1 0 120 KB 8
J2 2 70 KB 4
J3 2 50 KB 12
J4 2 100 KB 12
J5 0 40 KB 4
J6 2 120 KB 4
J7 0 60KB 8
J8 0 30KB 8

Show how the memory will look after loading job 6 (assume no memory
compaction is allowed) using both FCFS and Best-Fit allocation technique.
23
Contiguous Allocation

Memory snapshots (Best-fit; partitions 120, 40, 60, 30, and 6 KB):

t = 0    120 KB: J1   40 KB: J5     60 KB: J7    30 KB: J8    6 KB: free
t = 4    120 KB: J1   40 KB: free   60 KB: J7    30 KB: J8    6 KB: free
         (J5 done; J2 (70 KB) is next in the queue but no free partition fits it)
t = 8    120 KB: J2   40 KB: free   60 KB: J3    30 KB: free   6 KB: free
         (J1, J7, J8 done; J4 (100 KB) must still wait)
t = 12   120 KB: J4   40 KB: free   60 KB: J3    30 KB: free   6 KB: free
t = 20   120 KB: J4   40 KB: free   60 KB: free  30 KB: free   6 KB: free
         (J3 done; J6 (120 KB) waits for the 120 KB partition)
t = 24   120 KB: J6   40 KB: free   60 KB: free  30 KB: free   6 KB: free

24
Fragmentation
Fragmentation is a phenomenon in which storage space is used
inefficiently, reducing capacity and often performance.
Fragmentation leads to storage space being "wasted".
Types of fragmentation: External – Internal.

 External fragmentation exists when there is enough total


memory space to satisfy a request but the available spaces
are not contiguous: storage is fragmented into a large number
of small holes.
 Internal fragmentation exists when the allocated memory is
slightly larger than the requested memory.
25
Fragmentation
Solutions for external fragmentation:

1. Compaction: in which the memory contents are shuffled so


as to place all free memory together in one large block.

2. Another possible solution to the external-fragmentation


problem is to permit the logical address space of the
processes to be noncontiguous. (Paging - Segmentation).

26
Computer Science Department
Faculty of Computers and Information Sciences
Mansoura University

MEMORY MANAGEMENT
(Part 2)
Presented By:
Dr. Abdelaziz Said
1
Agenda
Paging.
Segmentation.
Paging with Segmentation.

2
Paging
• Paging is a memory management scheme that permits the
physical address space of a process to be noncontiguous.
• Divide logical memory into blocks of the same size called pages.

• Divide physical memory into fixed-sized blocks called frames.

• Page size = frame size (the size is a power of 2).

• Keep track of all free frames.

• To run a program of size n pages, need to find n free frames and


load program.

3
Paging
• Page table: translate logical to physical addresses.
• Advantage: No External fragmentation
• Disadvantage: Internal fragmentation.

4
Paging

5
Paging
Address Translation Scheme
• Logical address : Address generated by CPU is divided into:
– Page number (p): used as an index into a page table which
contains base address of each page in physical memory.
– Page offset (d): combined with base address to define the
physical memory address that is sent to the memory unit.

• physical address: Address seen by memory is divided into:


– Frame number (f): obtained from page table
– Frame offset (d): combined with base address to define the
physical memory address that is sent to the memory unit.

6
Paging
Address Translation Scheme

7
Paging
Address Translation Scheme
• Size of logical address space is 2^m (no. of bits for logical address = m = p + d)
– No. of pages = 2^k → p = k bits (or p = m − n bits)
– Page size = 2^n → d = n bits
• No. of bits in physical address = f + d
– No. of frames = 2^k → f = k bits
– Frame size = 2^n → d = n bits
Example: Consider a logical address space of eight pages of 1024 words each,
mapped onto a physical memory of 32 frames.
• How many bits are there in the logical address?
• How many bits are there in the physical address?
Solution
Logical address = 3-bit page number + 10-bit offset = 13 bits
Physical address = 5-bit frame number + 10-bit offset = 15 bits
8
Paging
Translate Logical address to physical address
• Known
 logical address
 page size

• Solution
– logical address
• p = int (logical address / page size)
• d = mod (logical address / page size)
– physical address
• from page table f =
• d=
physical address = (f x page size) + d
9
Paging
Translate Logical address to physical address
• Example : Find physical address for the logical address char f at 5
• Solution:
Logical address
p = int (logical address / page size) = int (5/4) =1
d = mod (logical address / page size) = mod (5/4) = 1
physical address
from page table f = 6
d=1

physical address = (f x page size) + d


= (6 x 4) + 1 = 25

10
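The worked example above can be reproduced in a few lines. A Python sketch; the page-table entry p = 1 → f = 6 is read off the slide's figure, so treat it as an assumption:

```python
page_size = 4
page_table = {1: 6}                  # p -> f; only the entry the example uses

logical = 5
p, d = divmod(logical, page_size)    # p = int(5/4) = 1, d = 5 mod 4 = 1
physical = page_table[p] * page_size + d
assert (p, d, physical) == (1, 1, 25)
```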
Paging
Translate physical address to Logical address
• Known
 Physical address
 frame size
• Solution
– physical address
• f = int (physical address / frame size)
• d = mod (physical address / frame size)
– logical address
• from page table p =
• d=

logical address = (p x page size) + d


11
Paging
Translate physical address to Logical address
• Example : Find logical address for the physical address char o at 10
• Solution
physical address
• f = int (physical address / frame size)= int (10/4) =2
• d = mod (physical address / frame size) = mod (10/4) = 2
logical address
• from page table p = 3
• d=2

logical address = (p x page size) + d


= (3 x 4) + 2 = 14

12
Paging
Free Frames
• Frame table: a data structure
used by the operating system
to keep track of allocated and
free frames.
• Each process has its own paging
table.
• When context switch occur the
dispatcher loads the paging
table of the process into
hardware page table.
• Paging increase context-switch
time.

13
Paging
Internal fragmentation problem in paging
• The paging approach suffers from internal fragmentation: the last
frame allocated may not be completely full.
• In the worst case, a process would need n pages plus one byte.
It would be allocated n + 1 frames resulting in internal
fragmentation of almost an entire frame.
• Small page sizes are desirable.
• However, smaller page size → increase in no. of pages → increase in
page table size.

14
Paging
Internal fragmentation problem in paging

Example: If a process is 3.5 KB and the frame size = 2 KB, compute the internal
fragmentation.

Solution: No of frames = 2 frames; internal fragmentation = 0.5 KB

Example: If a process is 209 KB, compute the internal fragmentation for frame
size = 0.5, 1, 2, 4, 8 KB
Solution:

Frame size (KB)              0.5     1     2     4     8
No of frames                 418   209   105    53    27
Internal fragmentation (KB)    0     0     1     3     7

15
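The two solutions above follow from one formula: frames are allocated whole, so the unused tail of the last frame is wasted. A minimal sketch:

```python
import math

def internal_fragmentation(process_kb, frame_kb):
    """Return (number of frames, wasted KB in the last frame)."""
    frames = math.ceil(process_kb / frame_kb)
    return frames, frames * frame_kb - process_kb

assert internal_fragmentation(3.5, 2) == (2, 0.5)
for frame_kb, expected in zip([0.5, 1, 2, 4, 8],
                              [(418, 0), (209, 0), (105, 1), (53, 3), (27, 7)]):
    assert internal_fragmentation(209, frame_kb) == expected
```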
Paging
Implementation of Page Table
• Page table is kept in main memory.
 Page-table base register (PTBR) points to the page table.
 Thus, changing page tables requires changing only PTBR register, and
hence context switch time is reduced.

 Page-table length register (PTLR) indicates size of the page table.


 This value is checked against every logical address to verify that the
address is in the valid range for the process.

• Every data/instruction access requires two memory accesses, one for the page
table and one for the data/instruction.
• The two memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers
(TLBs).
16
Paging
Associative Memory or TLB

17
Paging
Associative Memory or TLB
• TLB consists of two parts key and value. When it is presented with
an item, it is compared with all keys simultaneously. If the item is
found, the corresponding value field is returned.
• TLB contains only a few of page table entries.
• When a logical address is generated, if the page number is found
in TLB the corresponding frame number is immediately available.
• If page number not found in TLB (TLB miss), a memory reference
must be made. In addition, we add page number and frame
number to TLB, so they will be found quickly on the next
reference.

18
Paging
Associative Memory or TLB

Effective Access Time (EAT) = H × (TLB access time + MA) +
(1 − H) × (TLB access time + PT access time + MA)

Example: Suppose the paging hardware with TLB has a 90 percent


hit ratio. Page numbers found in the TLB have a total access time of
100 nanoseconds. Those which are not found there have a total
access time of 200 nanoseconds. What is the effective access time?
a. 100 nanoseconds b. 110 nanoseconds
c. 190 nanoseconds d. 200 nanoseconds
Solution: EAT = 0.9 × 100 + 0.1 × 200 = 110 nanoseconds → (b).

19
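Since the question already gives the total access time for hits and misses, the EAT is just their hit-ratio-weighted average. A minimal sketch:

```python
def eat(hit_ratio, hit_ns, miss_ns):
    """Weighted average of the total hit and miss access times."""
    return hit_ratio * hit_ns + (1 - hit_ratio) * miss_ns

# 90% hit ratio, 100 ns on a hit, 200 ns on a miss -> 110 ns, answer (b).
assert abs(eat(0.90, 100, 200) - 110) < 1e-9
```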
Paging
Memory Protection
• Memory protection in paging is
implemented by associating
protection bit with each frame.
• Valid-invalid: bit attached to
each entry in the page table:
– Valid: indicates that the
associated page is in the
process’ logical address
space, and is thus a legal
page.
– Invalid: indicates that the
page is not in the process’
logical address space.
20
Segmentation
• Segmentation is a memory-
management scheme that
supports user view of memory.
• A program is a collection of
segments. A segment is a logical
unit such as: main program,
procedure, function, method,
object, local variables, global
variables, common block, stack,
symbol table, arrays, etc.

21
Segmentation

22
Segmentation
Segmentation Architecture
• Logical address consists of: <segment-number, offset>,
• Segment table: maps two-dimensional physical addresses; each
table entry has:
– Base: contains the starting physical address of segments in
memory.
– Limit: specifies the length of the segment.
• Segment-table base register (STBR) points to the segment table’s
location in memory.
• Segment-table length register (STLR) indicates number of
segments used by a program; segment number is legal if < STLR.

23
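The segment-table lookup above can be sketched directly: the offset is checked against the limit, and a legal offset is added to the base. The (base, limit) pairs below are illustrative, not from the slides' figure:

```python
def translate(seg, offset, segment_table):
    """segment_table: segment number -> (base, limit).
    Offsets at or beyond the limit trap to the OS."""
    base, limit = segment_table[seg]
    if offset >= limit:
        raise ValueError("trap: offset beyond segment limit")
    return base + offset

# Illustrative (base, limit) pairs.
table = {0: (1400, 1000), 2: (4300, 400)}
assert translate(0, 53, table) == 1453
assert translate(2, 53, table) == 4353
```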
Segmentation

24
Segmentation
Advantages:
– Segment sharing
– Easier to relocate segment than entire program
– Avoids allocating unused memory
– Flexible protection
– Efficient translation
– Segment table small
Disadvantages:
– Segments have variable lengths → how to fit?
– Segments can be large → fragmentation

25
Segmentation
• Pages Vs. Segments:
– Pages:
 Fixed units of a computer memory allocated by the
computer for storing and processing information.
 Easy to place in memory (fixed size).

– Segments:
 Variable units of a computer memory.
 Because of different size of segments memory allocation
take more processing.

26
Segmentation

27
Paging with Segmentation
• Paging with Segmentation is a memory-management scheme that
attempts to take the advantages of both paging and
segmentation memory-management schemes.
• The segment-table entry contains not the base address of the
segment, but rather the base address of a page table for this
segment.

28
Computer Science Department
Faculty of Computers and Information Sciences
Mansoura University

VIRTUAL MEMORY

Presented By:
Dr. Abdelaziz Said
1
Agenda
Paging.
Demand Paging.
Page Fault.
Page Replacement.
Page Replacement Algorithms.

2
Virtual Memory
• Virtual memory allows the execution of processes that may
not be completely in memory.
• Virtual memory: separation of user logical memory from physical
memory.
• Benefits
– Only part of the program needs to be in memory for execution.
– Logical address space can therefore be much larger than physical address
space.
– Allows address spaces to be shared by several processes.
– Allows for more efficient process creation.
– Programming much easier ( free programmers from concern of memory-
storage limitations )
• Virtual memory can be implemented via: Demand Paging
3
Virtual Memory

4
Demand Paging
• A demand-paging system is similar to a paging system with swapping.
• Demand paging Brings a page into memory only when it is needed.
• Advantages:
– Less I/O needed.
– Less memory needed.
– Faster response.
– More users.

5
Demand Paging
• When a process is executed, demand paging uses a lazy swapper (a
pager) that swaps into memory only the pages that will be needed.

 Swapper: swap entire processes.


 Pager: swap individual pages of a process.

6
Demand Paging
• How to distinguish between pages in memory & pages on the
disk?

7
Demand Paging
• How to distinguish between pages in memory & pages on the
disk?
– Use valid-invalid bit in page table
– When bit is set to valid, page is legal & in memory
– When bit is set to invalid.
• Page not valid ( not exist in logical space of program ).
• Page is valid but it is not in memory (bring it to memory).

• When page is needed, reference to it in internal table.


– If invalid reference then abort.
– If not-in-memory then bring it to memory.
8
Page Fault
• During address translation, if valid–invalid bit in page table entry
is 0 → page fault.
• Page Fault is interrupt that arises upon a reference to a page that
is not in main memory.

9
Page Fault
• Steps for handling page fault trap
1. First reference to a page always causes a page fault trap to OS.
2. OS looks at the internal table of the process (kept within PCB):
 If invalid reference, then abort the process.
 If the reference is valid, then the page is just not in memory.
3. Get empty frame.
4. Swap page into frame.
5. Update the internal table and the page table (validation bit = 1).
6. Restart instruction.

10
Page Replacement
• What happens if there is no free frame?
 Page replacement: find some page in memory, but not really
in use, swap it out.

 Replacement algorithm should result in minimum number of


page faults.

 Same page may be brought into memory several times.

11
Page Replacement

12
Page Replacement
Basic Page Replacement
• Find the location of the desired page on disk.
• Find a free frame:
– If there is a free frame, use it.
– If no free frame, use a page replacement algorithm to select
a victim frame.
• Read the desired page into the (newly) free frame. Update the
page and frame tables.
• Restart the process.

13
Page Replacement Algorithms
1- First-In-First-Out (FIFO).
2- Optimal Page Replacement.
3- Least Recently Used (LRU).
4- LRU-Approximation Page.
5- Counting-Based Page.

14
Page Replacement Algorithms
First-In-First-Out (FIFO):
• When pages must be replaced, the oldest page is
chosen.
• We create a FIFO queue to hold all pages in memory.
• We replace the page at the head of the queue.
• When page is brought into memory, we insert it at the
tail of queue.
• Advantages:
 Low-overhead implementation
• Disadvantages:
 May replace heavily used pages
15
Page Replacement Algorithms
First-In-First-Out (FIFO):
• Given the reference string

• How many page faults would occur, assuming 3 frames?

Total page faults: 15


16
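FIFO replacement can be simulated with a set of resident pages plus a queue recording arrival order. The reference string's figure did not survive here; this sketch assumes the classic string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1, which reproduces the slide's count of 15 faults with 3 frames:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults with FIFO replacement in `nframes` frames."""
    resident, queue, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) == nframes:        # memory full: evict the oldest
            resident.discard(queue.popleft())
        resident.add(page)
        queue.append(page)
    return faults

# Assumed reference string (see lead-in):
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert fifo_faults(refs, 3) == 15
```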
Page Replacement Algorithms
Optimal Page Replacement:
• Replace the page that will not be used for the longest period of
time.
• The algorithm that has the lowest page-fault rate of all
algorithms.

• Advantages:
 Optimal solution and can be used as an off-line analysis method.
• Disadvantages:
 No online implementation.

• Optimal Vs. FIFO


 FIFO algorithm uses the time when a page was brought into
memory.
 Optimal algorithm uses the time when a page is to be used.
17
Page Replacement Algorithms
Optimal Page Replacement:

Total 9 page faults

18
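The optimal policy needs the future of the reference string, which is why it only works off-line. A sketch, again assuming the classic reference string from the FIFO example (the figure is missing), which gives the slide's 9 faults with 3 frames:

```python
def optimal_faults(refs, nframes):
    """Evict the resident page whose next use is farthest in the future
    (pages never used again are evicted first)."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == nframes:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)   # next reference position
                except ValueError:
                    return float("inf")           # never used again
            resident.remove(max(resident, key=next_use))
        resident.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert optimal_faults(refs, 3) == 9
```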
Page Replacement Algorithms
Least Recently Used (LRU):
• Pages that have been heavily used in the last few instructions will
probably be heavily used again in the next few.
• Throw out the page that has been unused for a long period of
time.
• LRU can be implemented by Counters and stacks.

• Disadvantages:
 Expensive  maintain list of pages by references and update
on every reference

19
Page Replacement Algorithms
Least Recently Used (LRU):

Page faults: 12

20
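LRU can be simulated with a list kept in recency order (a cheap stand-in for the counter or stack implementations mentioned above). With the same assumed reference string as in the FIFO example, this reproduces the slide's 12 faults:

```python
def lru_faults(refs, nframes):
    """Count faults when the least recently used resident page is evicted."""
    resident, faults = [], 0        # list ordered from LRU ... MRU
    for page in refs:
        if page in resident:
            resident.remove(page)   # hit: refresh to most-recent position
        else:
            faults += 1
            if len(resident) == nframes:
                resident.pop(0)     # evict the least recently used
        resident.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert lru_faults(refs, 3) == 12
```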
Page Replacement Algorithms
LRU-Approximation Page Replacement:
• Reference bits are associated with each entry in the page table.
• When page is referenced  bit set to 1.
• Replace the one which is 0 (if one exists).
• We do not know the order of use.

• There are 3 algorithms:


1. Additional-Reference-Bits Algorithm.
2. Second-Chance Algorithm (Clock Algorithm).
3. Enhanced Second-Chance Algorithm.

21
Page Replacement Algorithms
Additional-Reference-Bits Algorithm:
• We can gain additional ordering information by recording the reference
bits at regular intervals.
• We can keep an 8-bit history byte for each page in a table in memory.
• Ex:
 A page with a history value of 11000100 has been used more
recently than one with a value of 01110111.
 If the reference bits contain 00000000 → the page has not been used
for eight time periods.
 A page that is used at least once in each period → value of 11111111.

• Problem
 The numbers are not guaranteed to be unique.
• Solution:
 Swap out all pages with the smallest value, or use FIFO to choose among them.
22
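The history-byte update is a one-line shift, sketched below; the per-interval reference pattern is hypothetical:

```python
def age(history, referenced):
    """At each timer interrupt: shift the 8-bit history right one place and
    put the page's current reference bit into the high-order position."""
    return (history >> 1) | (0b10000000 if referenced else 0)

# The slide's comparison holds when histories are read as unsigned numbers:
assert 0b11000100 > 0b01110111        # the first page was used more recently

h = 0b00000000
for ref in [1, 1, 0, 0, 0, 1, 0, 0]:  # hypothetical per-interval reference bits
    h = age(h, ref)
assert h == 0b00100011                # most recent interval is the high bit
```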
Page Replacement Algorithms
Second-Chance Algorithm (Clock algorithm):
• Based on FIFO replacement algorithm.
• When a page selected, we check its reference bit.
 If the value is 0 we replace this page
 If the reference bit is set to 1: We give the page a second chance
and move on to select the next FIFO page.
• When a page gets a second chance, its reference bit is cleared, and its
arrival time is reset to the current time.
• Can implement second-chance algorithm using a circular queue.

23
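The clock sweep above amounts to: advance the hand, clearing reference bits, until a page with bit 0 is found. A sketch of victim selection only (the resident set is illustrative):

```python
def second_chance_victim(pages, ref_bits, hand):
    """pages: circular list of resident pages; ref_bits: parallel bits.
    Sweep from `hand`, clearing bits, until a page with bit 0 is found."""
    while True:
        if ref_bits[hand] == 0:
            return hand                 # victim frame
        ref_bits[hand] = 0              # give this page a second chance
        hand = (hand + 1) % len(pages)

pages    = ["A", "B", "C", "D"]         # illustrative resident pages
ref_bits = [1, 1, 0, 1]
victim = second_chance_victim(pages, ref_bits, 0)
assert victim == 2                      # first page found with bit 0
assert ref_bits == [0, 0, 0, 1]         # A and B lost their bits on the way
```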
Page Replacement Algorithms
Second-Chance Algorithm (Clock algorithm):

24
Page Replacement Algorithms
Second-Chance Algorithm (Clock algorithm):

25
Page Replacement Algorithms
Enhanced Second-Chance Algorithm:
• We can enhance the second-chance algorithm by considering the
reference bit and the modify bit.

1. (0, 0) neither recently used nor modified—best page to


replace
2. (0, 1) not recently used but modified—not quite as good,
because the page will need to be written out before
replacement
3. (1, 0) recently used but clean—probably will be used
again soon
4. (1, 1)recently used and modified—probably will be used
again soon
26
Page Replacement Algorithms
Counting-Based Page Replacement:
• We can keep a counter of the number of references that have
been made to each page and develop the following two schemes.

1. The least frequently used (LFU):


 Requires that the page with the smallest count be replaced.
 The reason for this selection is that an actively used page should
have a large reference count.
• Problem:
 When a page is used heavily during the initial phase of a process,
but then is never used again (large count).
• Solution:
 Shift the counts right by 1 bit at regular intervals, forming an
exponentially decaying average usage count.
27
Page Replacement Algorithms
Counting-Based Page Replacement:
• We can keep a counter of the number of references that have
been made to each page and develop the following two schemes.

2. The Most frequently used (MFU):


 The most frequently used (MFU) page-replacement algorithm
requires that the page with the largest count be replaced.
 Based on the argument that the page with the smallest count
was probably just brought in and has yet to be used.

• The implementation of these algorithms is expensive, and they


do not approximate optimal replacement well.
28
Computer Science Department
Faculty of Computers and Information Sciences
Mansoura University

STORAGE MANAGEMENT

Presented By:
Dr. Abdelaziz Said
1
Agenda
Introduction.
Mass-Storage Structure.
Disk Scheduling.

2
Introduction
• We have three essential requirements for long-term information
storage:
 It must be possible to store a very large amount of
information.
 The information must survive the termination of the process
using it.
 Multiple processes must be able to access the information
concurrently.

3
Introduction
• The usual solution to all these problems is to store information
on disks and other external media in units called files.

• Files are managed by the operating system.

• How they are structured, named, accessed, used, protected, and


implemented are major topics in operating system design.

• As a whole, that part of the operating system dealing with files is


known as the file system.

4
Mass-storage Structure

Magnetic Disks:
• Magnetic disks provide the bulk of secondary storage for modern
computer systems.

5
Mass-storage Structure

Magnetic Disks:
• Disk speed has two parts:
1. The transfer rate is the rate at which data flow between the
drive and the computer.
2. The positioning time, or random-access time, consists of two
parts:
 seek time: the time necessary to move the disk arm to the
desired cylinder.
 rotational latency: the time necessary for the desired sector
to rotate under the disk head.
• Bandwidth: the total number of bytes transferred divided by
the total time between the first request for service and the
completion of the last transfer.
6
Mass-storage Structure

Magnetic Disks:

• Head crash: occurs when the head makes contact with the disk surface,
causing damage to the magnetic surface.
 A head crash normally cannot be repaired; the entire disk
must be replaced.

• A disk drive is attached to a computer by a set of wires called an


I/O bus.
 advanced technology attachment (ATA), serial ATA (SATA),
eSATA, universal serial bus (USB), and fiber channel (FC).

7
Mass-storage Structure

Solid-State Disks (SSDs):


• an SSD is nonvolatile memory that is used like a hard drive.
• many variations :
 from DRAM with a battery to allow it to maintain its state in a
power failure through flash-memory technologies like single-level
cell (SLC) and multilevel cell (MLC) chips.
• SSD Vs. Hard Disk:
 SSD is more reliable than a hard disk because it has no moving
parts.
 SSD is faster than a hard disk because it has no seek time or
rotational latency.
 SSD consumes less power.
 SSD more expensive per megabyte than traditional hard disks.
 SSD have less capacity than the larger hard disks.
 SSD may have shorter life spans than hard disks.
8
Mass-storage Structure

Magnetic Tapes:
• A tape is kept in a spool and is wound or rewound past a read–
write head.
• Magnetic tape was used as an early secondary-storage medium.
 Its access time is slow compared with that of main memory and
magnetic disk.
 Its random access is about a thousand times slower than random
access to magnetic disk.

• Tapes are used mainly for backup, for storage of infrequently


used information, and as a medium for transferring information
from one system to another.

9
Disk Scheduling

• We can improve both the access time and the bandwidth by


managing the order in which disk I/O requests are serviced.

• There are two objectives for any disk scheduling algorithm:


1. Maximize the throughput:
The average number of requests satisfied per time unit.
2. Minimize the response time:
The average time that a request must wait before it is
satisfied.

10
Disk Scheduling

• Disk Scheduling Algorithms:

1. First Come First Served (FCFS).


2. Shortest Seek Time First (SSTF).
3. SCAN (Elevator).
4. Circular SCAN (C-SCAN).
5. LOOK.
6. C-LOOK.

11
Disk Scheduling

First Come First Served (FCFS):


• Fairness among requests (simple).
• Improves the response time.
• No starvation.
• Disadvantage:
 Arrival may be on random spots on the disk (long seeks), Wild
swings can happen (poor average service time).
 Throughput is not efficient.

12
Disk Scheduling
First Come First Served (FCFS):
• Example: Given the following queue 95, 180, 34, 119, 11, 123, 62, 64
with read write head initially at 50 and the tail being at 199.

Total head movement: 45+85+146+85+108+112+61+2 = 644


13
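Under FCFS the total head movement is simply the sum of the successive seek distances (|50−95| + |95−180| + … = 45+85+146+85+108+112+61+2 = 644 tracks). A minimal sketch:

```python
def seek_distance(start, queue):
    """Total head movement when requests are served strictly in order."""
    total, pos = 0, start
    for track in queue:
        total += abs(track - pos)
        pos = track
    return total

queue = [95, 180, 34, 119, 11, 123, 62, 64]
assert seek_distance(50, queue) == 644   # 45+85+146+85+108+112+61+2
```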
Disk Scheduling

Shortest Seek Time First (SSTF):


• Processes the request from the queue by choosing the request
that is the closest to the current head.
• Reduced the seek time compared to FCFS.
• Disadvantage: Starvation is possible.

14
Disk Scheduling

Shortest Seek Time First (SSTF):

• Services requests by choosing the pending request closest to the
  current head position.
• Reduces the seek time compared with FCFS.
• Disadvantage: Starvation is possible.
• Example: Given the following queue 95, 180, 34, 119, 11, 123, 62, 64
  with read write head initially at 50 and the tail being at 199.

Total head movement:
12+2+30+23+84+24+4+57 = 236
15
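The same total falls out of a short SSTF simulation (a sketch, not part of the slides; the function name is illustrative):

```python
# Sketch: SSTF repeatedly services the pending request nearest to the head.
def sstf_head_movement(queue, head):
    pending, total = list(queue), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # closest request
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_head_movement([95, 180, 34, 119, 11, 123, 62, 64], 50))  # 236
```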
Disk Scheduling

SCAN (Elevator):
• The head scans toward the nearer end of the disk; when it reaches that
  end, it reverses direction and services the requests it skipped on the
  way.
• Prevents starvation.
• Bounded waiting time for each request (short service times).
• Disadvantage: Requests at the other end will take a while.
• Example: Given the following queue 95, 180, 34, 119, 11, 123, 62, 64
  with read write head initially at 50 and the tail being at 199.

Total head movement:
16+23+11+62+2+31+24+4+57 = 230
17
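A minimal SCAN sketch (not from the slides; the function name is illustrative) for the case in the example, where the head first sweeps toward the nearer edge:

```python
# Sketch: SCAN heading toward the nearer disk edge first (track 0 is closer
# to the head than track 199 in the lecture example).
def scan_head_movement(queue, head, disk_end=199):
    if head <= disk_end - head:                         # lower edge is nearer
        highest = max((t for t in queue if t > head), default=head)
        return head + highest                           # head -> 0, then 0 -> highest
    lowest = min((t for t in queue if t < head), default=head)
    return (disk_end - head) + (disk_end - lowest)      # head -> end -> lowest

print(scan_head_movement([95, 180, 34, 119, 11, 123, 62, 64], 50))  # 230
```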
Disk Scheduling

Look:
• The arm goes only as far as the final request in each direction, then
  reverses immediately, without first traveling all the way to the end of
  the disk.
• Like SCAN, but the head stops moving when there are no more requests
  in the current direction.
18
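A minimal LOOK sketch (not from the slides; the function name is illustrative), for a head that sweeps toward lower tracks first and reverses at the lowest request:

```python
# Sketch: LOOK heading toward lower tracks first; the arm reverses at the
# lowest request instead of traveling on to track 0.
def look_head_movement(queue, head):
    lowest, highest = min(queue), max(queue)
    if lowest >= head:                # all requests above the head
        return highest - head
    if highest <= head:               # all requests below the head
        return head - lowest
    return (head - lowest) + (highest - lowest)

print(look_head_movement([95, 180, 34, 119, 11, 123, 62, 64], 50))  # 208
```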
Disk Scheduling
C-LOOK:
• Enhanced version of C-SCAN.
• The scan does not go beyond the last request in its direction.
• The head moves, servicing requests, until there are no more in that
  direction; it then jumps to the outermost outstanding request at the
  other end.
• Example: Given the following queue 95, 180, 34, 119, 11, 123, 62, 64 with
  read write head initially at 50 and the tail being at 199.

Total head movement: 16+23+169+57+4+24+31+2 = 326 tracks.
19
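A minimal C-LOOK sketch (not from the slides; the function name is illustrative), matching the example's direction: sweep down, jump to the top outstanding request, sweep down again:

```python
# Sketch: C-LOOK heading toward lower tracks first, then jumping to the
# outermost outstanding request and sweeping downward again.
def clook_head_movement(queue, head):
    lower = sorted(t for t in queue if t < head)
    upper = sorted(t for t in queue if t >= head)
    total = 0
    if lower:
        total += head - lower[0]                            # down to lowest request
    if upper:
        total += upper[-1] - (lower[0] if lower else head)  # jump to the top
        total += upper[-1] - upper[0]                       # sweep through upper requests
    return total

print(clook_head_movement([95, 180, 34, 119, 11, 123, 62, 64], 50))  # 326
```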
Disk Scheduling

Selecting a Disk-Scheduling Algorithm:

• SCAN and C-SCAN perform better for systems that place a heavy load
  on the disk.

• Performance depends on the number and types of requests.

• The disk-scheduling algorithm should be written as a separate module
  of the operating system, allowing it to be replaced with a different
  algorithm if necessary.

• Either SSTF or LOOK is a reasonable choice for the default algorithm.
20
RAID Structure

RAID (Redundant Array of Independent Disks):

• A way of storing the same data in different places (redundantly) on
  multiple hard disks, so that I/O operations can overlap in a balanced
  way, improving performance.

• Increases fault tolerance.
21
File Concept

• The file system consists of two distinct parts:
  – A collection of files, each storing related data.
  – A directory structure, which organizes and provides information
    about all the files in the system.

• A file is a named collection of related information that is recorded on
  secondary storage.
• From a user’s perspective, a file is the smallest allotment of logical
  secondary storage; that is, data cannot be written to secondary storage
  unless they are within a file.
22
File Concept

File Attributes:
• A file’s attributes vary from one operating system to another but
  typically consist of these: name, identifier, type, location, size,
  protection, and time, date, and user identification.
• Information about files is kept in the directory structure, which is
maintained on the disk.

File Operations:
• To define a file properly, we need to consider the operations that can
be performed on files.
• The operating system can provide system calls to open, close, create,
write, read, reposition, delete, and truncate files.
23
File Concept

File Types:
• If an operating system recognizes the type of a file, it can then
  operate on the file in reasonable ways.
24
Access Methods

• The information in the file can be accessed in several ways:
  1. Sequential Access.
  2. Direct Access.

Sequential Access:
• The simplest access method. Information in the file is processed in
  order, one record after the other.
• This mode of access is by far the most common; for example, editors
  and compilers usually access files in this fashion.
• Sequential access is based on a tape model of a file.
25
Access Methods

• The information in the file can be accessed in several ways:
  1. Sequential Access.
  2. Direct Access.

Direct Access:
• Also called relative access.
• Here, a file is made up of fixed-length logical records that allow
  programs to read and write records rapidly in no particular order.
• The direct-access method is based on a disk model of a file.
26
File-system Implementation

• How files and directories are stored, how disk space is managed, and
  how to make everything work efficiently and reliably.

• Most disks can be divided up into one or more partitions, with an
  independent file system on each partition.

• Sector 0 of the disk is called the MBR (Master Boot Record) and is
  used to boot the computer.

• The end of the MBR contains the partition table.

• When the computer is booted, the BIOS reads in and executes the
  MBR.
27
File-system Implementation

28
File Allocation Methods

• Goals of allocating space to files:
   Effective utilization of disk space.
   Fast access to files.

• File Allocation Methods: contiguous, linked, and indexed.
29
File Allocation Methods

Contiguous Allocation:

30
File Allocation Methods

Contiguous Allocation:
• Requires that each file occupy a set of contiguous blocks on the disk.
• Accessing block b+1 after block b normally requires no head
movement.
• When head movement is needed, the head need only move from one
track to the next.
• The directory entry for each file indicates the address of the starting
block and the length of the area allocated for this file.

31
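The direct-access rule above (physical block = start block + i) can be sketched as follows; the directory entry and file name here are hypothetical, not from the slides:

```python
# Hypothetical directory: each entry stores (start block, length in blocks),
# exactly the two fields a contiguous-allocation directory entry needs.
directory = {"mail": (19, 6)}

def physical_block(name, i):
    start, length = directory[name]
    if not 0 <= i < length:
        raise IndexError("logical block out of range")
    return start + i          # direct access: start block + i

print(physical_block("mail", 3))  # 22
```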
File Allocation Methods

Contiguous Allocation:
• Advantages:
– Number of disk seek is minimal.
– Simple, require start block & length.
– Support both sequential access (read next block) & direct access (start
block + i).

• Disadvantages:
– Finding space for new file (dynamic storage allocation).
– External fragmentation: free space broken into chunks, all is small to
allocate file.
Sol: compacting all free space into one contiguous space.
– Determining how much space needed for file.
– File cannot grow. 32
File Allocation Methods

Linked list Allocation:

33
File Allocation Methods

Linked List Allocation Using File Allocation Table (FAT):

34
File Allocation Methods

Linked List Allocation Using File Allocation Table (FAT):

• The disadvantages of the linked list can be solved by taking the pointer
  from each disk block and putting it in a table in memory, the FAT (File
  Allocation Table).
• A benefit is that random-access time is improved, because the disk
  head can find the location of any block by reading the information in
  the FAT.

• Disadvantages:
  – The entire table must be in memory all the time to make it work.
  – The FAT idea does not scale well to large disks.

35
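Following a file's chain through an in-memory FAT can be sketched like this (the table contents and the end-of-file marker are hypothetical, for illustration only):

```python
# Hypothetical FAT fragment: index = block number, value = next block in
# the file's chain; -1 marks end of file.
FAT = {217: 618, 618: 339, 339: -1}

def file_blocks(start):
    blocks, b = [], start
    while b != -1:            # chain is traversed entirely in memory
        blocks.append(b)
        b = FAT[b]
    return blocks

print(file_blocks(217))  # [217, 618, 339]
```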
File Allocation Methods

Indexed Allocation:

36
File Allocation Methods

Indexed Allocation:

• Brings all the pointers together into one location called the index block.
• Each file has its own index block.
• The directory contains the address of the index block of a file.

• Advantages:
  – Supports direct access.
  – Does not suffer from external fragmentation.

• Disadvantages:
  – The index block occupies some space, which is considered an
    overhead of the method.
37
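With an index block, direct access is a single table lookup; a minimal sketch (the block numbers here are hypothetical):

```python
# Hypothetical index block: the i-th entry holds the disk address of the
# file's i-th logical block.
index_block = [9, 16, 1, 10, 25]

def block_address(index_block, i):
    return index_block[i]     # one lookup in the index block gives direct access

print(block_address(index_block, 2))  # 1
```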
Free Space Management

• Disk space is limited; we need to reuse space from deleted files
  for new files.

• Free Space List: records all free disk blocks.

• Creating a file requires searching the list for the required amount of
  space; this space is then removed from the list.

• Removing a file adds its space back to the free-space list.

38
Free Space Management
• Implementation of Free Space List:
1. Bit Vector
• 1 bit represents one block.
• Free block  bit = 1; allocated block  bit = 0.
• Example: a disk where only blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17,
  18, 25, 26, and 27 are free.
  Bit vector is: 001111001111110001100000011100000 ...etc.
• Advantages:
– Simplicity
– Efficiency
– Easy to get contiguous space
• Disadvantage
– Bit vector requires extra space
39
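The bit vector in the example can be reproduced in a couple of lines (a sketch, not from the slides; the disk is modeled here as 33 blocks to match the string shown):

```python
# Free block -> bit 1, allocated block -> bit 0, one bit per block.
free = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}
bit_vector = "".join("1" if b in free else "0" for b in range(33))
print(bit_vector)  # 001111001111110001100000011100000
```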
Free Space Management
• Implementation of Free Space List:
2. Linked List
• Link all the free blocks together.
• A pointer to the first free block is cached in memory.
• Each free block contains a pointer to the next free block.
• Advantage: No waste of space.
• Disadvantage: Cannot get contiguous space easily.

3. Grouping
• Store the addresses of n free blocks in the first free block.
• The first n-1 of these blocks are actually free.
• The last block contains the addresses of another n free blocks.
• Advantage: A large number of free blocks can be found quickly.
40
Free Space Management

• Implementation of Free Space List:

4. Counting
• Keep the address of the first free block and the number (n) of
  free contiguous blocks that follow the first block.

• Each entry in the free-space list then consists of a disk address
  and a count.

41
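The counting representation can be sketched as follows (an illustrative helper, not from the slides); it collapses the free-block list from the bit-vector example into (address, count) pairs:

```python
# Sketch: collapse a list of free block numbers into (first block, count)
# runs, i.e., the counting representation of the free-space list.
def to_counting(free_blocks):
    runs, start, count = [], None, 0
    for b in sorted(free_blocks):
        if start is not None and b == start + count:
            count += 1                     # block extends the current run
        else:
            if start is not None:
                runs.append((start, count))
            start, count = b, 1            # begin a new run
    if start is not None:
        runs.append((start, count))
    return runs

print(to_counting([2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]))
# [(2, 4), (8, 6), (17, 2), (25, 3)]
```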