Lecture Part 2
MEMORY MANAGEMENT
(Part 1)
Presented By:
Dr. Abdelaziz Said
Agenda
Introduction.
Logical vs. Physical Address Space.
Swapping.
Contiguous Allocation.
Introduction
Storage Hierarchy (Cost – Speed - Capacity).
Introduction
A program must be brought into memory and placed within a process for it to run.
The process may be moved between disk and memory during its execution.
Memory Management
The task of memory subdivision is carried out dynamically by the operating system and is known as memory management.
The part of the operating system that manages (part of) the memory hierarchy is called the memory manager.
Its job includes keeping track of which parts of memory are in use.
Memory Management
A simple solution to both the relocation and protection problems is to equip the machine with two special hardware registers, called the base and limit registers.
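As a sketch, the base/limit check performed on every memory reference can be modeled as follows (the register values here are hypothetical, not from the slides):

```python
def translate(logical_addr, base, limit):
    """Relocate a logical address using base and limit registers.

    The limit register bounds the process's address space; every legal
    reference is relocated by adding the base register.
    """
    if logical_addr < 0 or logical_addr >= limit:
        # In hardware this raises a trap to the operating system.
        raise MemoryError("addressing error: protection violation")
    return base + logical_addr

# Hypothetical process loaded at base 300040 with limit 120900:
print(translate(100, base=300040, limit=120900))   # 300140
```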
Memory Management
Address binding is the mapping from one address space to
another.
1. Compile time: If you know at compile time where the process will
reside in memory, then absolute code can be generated.
2. Load time: If it is not known at compile time where the process will
reside in memory, then the compiler must generate relocatable code.
In this case, final binding is delayed until load time.
3. Execution time: Binding delayed until run time if the process can be
moved during its execution from one memory segment to another.
Hardware support is needed for address mapping (e.g., base and
limit registers).
Logical Vs. Physical Address Space
Memory- Management Unit (MMU)
Advantages:
OS can easily move a process during execution.
Disadvantages:
Slows down execution, because an addition is performed on every memory reference.
Swapping
Swapping on Mobile Systems
Contiguous Allocation
In contiguous allocation, each process is contained in a single
contiguous section of memory.
Two approaches: fixed-sized partitions vs. variable partitions.
Contiguous Allocation
Memory Allocation Algorithms:
1. First Fit: Allocate the first hole that is big enough.
2. Next Fit: Allocate the first hole that is big enough (like first fit);
however, the searching process starts at the location where the
previous searching ended.
3. Best-fit: Allocate the smallest hole that is big enough; must search
entire list, unless ordered by size. Produces the smallest leftover
hole.
4. Worst-fit: Allocate the largest hole; must also search entire list.
Produces the largest leftover hole.
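The four strategies can be sketched in a few lines (a minimal model that returns the index of the chosen hole, or None if no hole fits; next fit additionally takes the index where the previous search ended):

```python
def first_fit(holes, request):
    """Return the index of the first hole big enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def next_fit(holes, request, start):
    """Like first fit, but the search starts where the last one ended."""
    n = len(holes)
    for k in range(n):
        i = (start + k) % n
        if holes[i] >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole big enough, or None."""
    candidates = [i for i, size in enumerate(holes) if size >= request]
    return min(candidates, key=lambda i: holes[i]) if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if it is big enough."""
    candidates = [i for i, size in enumerate(holes) if size >= request]
    return max(candidates, key=lambda i: holes[i]) if candidates else None

holes = [50, 12, 20, 40, 32, 9, 22, 45]   # hole sizes in KB
print(holes[first_fit(holes, 20)])        # 50
print(holes[best_fit(holes, 20)])         # 20
print(holes[worst_fit(holes, 20)])        # 50
```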
Contiguous Allocation
• Example:- Consider a swapping system in which the memory
contains the holes of the following sizes in order: 50KB, 12KB,
20KB, 40KB, 32 KB, 9KB, 22KB, and 45KB. Which holes are
allocated for successive segment requests of:
(a) 20KB (b) 25KB (c) 7KB
using first fit, best fit, worst fit, and next fit.
(Hint: if the segment request is smaller than the allocated hole,
ignore the remaining hole)
Request      First Fit   Best Fit   Worst Fit   Next Fit
(a) 20 KB    50          20         50          50
(b) 25 KB    40          32         45          40
(c) 7 KB     12          9          40          32
Contiguous Allocation
• Example:- Consider a swapping system in which the memory
contains the holes of the following sizes in order: 10KB, 4KB,
20KB, 18KB, 7KB, 9KB, 12KB, and 15KB. Which holes are allocated
for successive segment requests of:
(a) 12KB (b) 10KB (c) 9KB
using first fit, best fit, worst fit, and next fit.
Request      First Fit   Best Fit   Worst Fit   Next Fit
(a) 12 KB    20          12         20          20
(b) 10 KB    10          10         18          18
(c) 9 KB     18          9          15          9
Contiguous Allocation
• Example:- Assume that we have the following jobs to be scheduled
using FCFS algorithm with variable partition memory scheme:
Job   Size     CPU Time
J1    50 KB    13
J2    120 KB   3
J3    70 KB    9
J4    20 KB    3
J5    105 KB   8
J6    55 KB    13
Show how the memory will look after loading job 6 (assume no memory
compaction is allowed) using both FCFS and Best-Fit allocation technique.
Fragmentation
Fragmentation is a phenomenon in which storage space is used
inefficiently, reducing capacity and often performance.
Fragmentation leads to storage space being "wasted".
Types of fragmentation: External – Internal.
Computer Science Department
Faculty of Computers and Information Sciences
Mansoura University
MEMORY MANAGEMENT
(Part 2)
Presented By:
Dr. Abdelaziz Said
Agenda
Paging.
Segmentation.
Paging with Segmentation.
Paging
• Paging is a memory management scheme that permits the
physical address space of a process to be noncontiguous.
• Divide physical memory into fixed-sized blocks called frames, and divide logical memory into blocks of the same size called pages.
Paging
• Page table: translate logical to physical addresses.
• Advantage: no external fragmentation.
• Disadvantage: internal fragmentation.
Paging
Address Translation Scheme
• Logical address : Address generated by CPU is divided into:
– Page number (p): used as an index into a page table which
contains base address of each page in physical memory.
– Page offset (d): combined with base address to define the
physical memory address that is sent to the memory unit.
Paging
Address Translation Scheme
• Size of the logical address space is 2^m (number of bits in the logical address = m = p + d).
– Number of pages = 2^k, so p = k bits (equivalently, p = m - n bits).
– Page size = 2^n, so d = n bits.
• Number of bits in the physical address = f + d.
– Number of frames = 2^k, so f = k bits.
– Frame size = 2^n, so d = n bits.
Example: Consider a logical address space of eight pages of 1024 words each,
mapped onto a physical memory of 32 frames.
• How many bits are there in the logical address?
• How many bits are there in the physical address?
Solution
Logical address = 3 bit page number + 10 bit offset = 13 bit
Physical address = 5 bit frame number + 10 bit offset = 15 bit
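The example can be checked with a short computation (a sketch; all sizes are assumed to be powers of two):

```python
import math

def paging_address_bits(num_pages, page_size, num_frames):
    """Return (logical address bits, physical address bits)."""
    d = int(math.log2(page_size))    # offset bits (n)
    p = int(math.log2(num_pages))    # page-number bits
    f = int(math.log2(num_frames))   # frame-number bits
    return p + d, f + d

# 8 pages of 1024 words each, mapped onto 32 physical frames:
print(paging_address_bits(8, 1024, 32))   # (13, 15)
```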
Paging
Translate Logical address to physical address
• Known
logical address
page size
• Solution
– Logical address:
• p = logical address div page size (integer division)
• d = logical address mod page size
– Physical address:
• from the page table, f = frame number stored for page p
• d is unchanged
physical address = (f x page size) + d
Paging
Translate Logical address to physical address
• Example: find the physical address for logical address 5 (the character 'f'), with page size 4.
• Solution:
Logical address:
p = 5 div 4 = 1
d = 5 mod 4 = 1
Physical address:
from the page table, f = 6
d = 1
physical address = (6 x 4) + 1 = 25
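The worked example translates directly into code (only the entry page 1 → frame 6 comes from the example; the entry for page 0 is a made-up value):

```python
def logical_to_physical(logical_addr, page_size, page_table):
    """Translate a logical address: split into (p, d), look up f, recombine."""
    p, d = divmod(logical_addr, page_size)   # p = quotient, d = remainder
    f = page_table[p]                        # frame number from the page table
    return f * page_size + d

# Page size 4; the page table maps page 1 to frame 6 as in the example.
page_table = {0: 5, 1: 6}                    # entry for page 0 is hypothetical
print(logical_to_physical(5, 4, page_table)) # 25  (= 6 * 4 + 1)
```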
Paging
Translate physical address to Logical address
• Known
physical address
frame size
• Solution
– Physical address:
• f = physical address div frame size (integer division)
• d = physical address mod frame size
– Logical address:
• from the page table (searched for the entry containing frame f), p = page number mapped to f
• d is unchanged
logical address = (p x page size) + d
Paging
Free Frames
• Frame table: a data structure used by the operating system to keep track of allocated and free frames.
• Each process has its own page table.
• When a context switch occurs, the dispatcher loads the page table of the new process into the hardware page table.
• Paging therefore increases context-switch time.
Paging
Internal fragmentation problem in paging
• The paging approach suffers from internal fragmentation: the last frame allocated may not be completely full.
• In the worst case, a process needs n pages plus one byte. It would be allocated n + 1 frames, resulting in internal fragmentation of almost an entire frame.
• Small page sizes are therefore desirable.
• However, a smaller page size increases the number of pages, which increases the page-table size.
Paging
Internal fragmentation problem in paging
Example: If a process is 209 KB, compute the internal fragmentation for frame sizes of 0.5, 1, 2, 4, and 8 KB.
Solution:
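The solution can be computed as follows (internal fragmentation = allocated frames x frame size - process size):

```python
import math

def internal_fragmentation(process_kb, frame_kb):
    """KB wasted in the last frame allocated to the process."""
    frames_needed = math.ceil(process_kb / frame_kb)
    return frames_needed * frame_kb - process_kb

for frame_kb in (0.5, 1, 2, 4, 8):
    print(frame_kb, "KB frames ->", internal_fragmentation(209, frame_kb), "KB wasted")
# 0.5 and 1 KB frames waste nothing (209 divides evenly);
# 2, 4 and 8 KB frames waste 1, 3 and 7 KB respectively.
```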
Paging
Implementation of Page Table
• Page table is kept in main memory.
Page-table base register (PTBR) points to the page table.
Thus, changing page tables requires changing only the PTBR, which reduces context-switch time.
• Every data/instruction access requires two memory accesses, one for the page
table and one for the data/instruction.
• The two memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers
(TLBs).
Paging
Associative Memory or TLB
• A TLB entry consists of two parts: a key and a value. When the TLB is presented with an item, the item is compared with all keys simultaneously. If it is found, the corresponding value field is returned.
• The TLB contains only a few of the page-table entries.
• When a logical address is generated, if the page number is found
in TLB the corresponding frame number is immediately available.
• If the page number is not found in the TLB (a TLB miss), a memory reference to the page table must be made. In addition, we add the page number and frame number to the TLB, so they will be found quickly on the next reference.
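A miniature software model of this hit/miss behaviour (real TLBs do the key comparison in parallel in hardware; the capacity and FIFO eviction policy here are illustrative assumptions):

```python
from collections import OrderedDict

class TLB:
    """Key = page number, value = frame number; FIFO eviction when full."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()

    def lookup(self, page, page_table):
        """Return (frame, hit?) for a page, consulting memory on a miss."""
        if page in self.entries:                 # TLB hit: frame immediately available
            return self.entries[page], True
        frame = page_table[page]                 # TLB miss: extra memory reference
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)     # evict the oldest entry
        self.entries[page] = frame               # cache for the next reference
        return frame, False

tlb = TLB(capacity=4)
page_table = {0: 7, 1: 2, 2: 9}                  # hypothetical page table
print(tlb.lookup(0, page_table))                 # (7, False): first access misses
print(tlb.lookup(0, page_table))                 # (7, True): now it hits
```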
Paging
Memory Protection
• Memory protection in paging is implemented by associating a protection bit with each frame.
• Valid-invalid: bit attached to
each entry in the page table:
– Valid: indicates that the
associated page is in the
process’ logical address
space, and is thus a legal
page.
– Invalid: indicates that the
page is not in the process’
logical address space.
Segmentation
• Segmentation is a memory-
management scheme that
supports user view of memory.
• A program is a collection of
segments. A segment is a logical
unit such as: main program,
procedure, function, method,
object, local variables, global
variables, common block, stack,
symbol table, arrays, etc.
Segmentation
Segmentation Architecture
• Logical address consists of: <segment-number, offset>,
• Segment table: maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
– Base: contains the starting physical address of segments in
memory.
– Limit: specifies the length of the segment.
• Segment-table base register (STBR) points to the segment table’s
location in memory.
• Segment-table length register (STLR) indicates number of
segments used by a program; segment number is legal if < STLR.
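Translation with the STLR and limit checks can be sketched as follows (the segment-table contents are hypothetical):

```python
def segment_to_physical(segment, offset, segment_table):
    """Translate <segment-number, offset> using (base, limit) entries."""
    if segment >= len(segment_table):        # STLR check: is the segment number legal?
        raise MemoryError("trap: invalid segment number")
    base, limit = segment_table[segment]
    if offset >= limit:                      # limit check: is the offset inside the segment?
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

# Hypothetical segment table of (base, limit) pairs:
segment_table = [(1400, 1000), (6300, 400)]
print(segment_to_physical(1, 53, segment_table))   # 6353
```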
Segmentation
Advantages:
– Segment sharing
– Easier to relocate segment than entire program
– Avoids allocating unused memory
– Flexible protection
– Efficient translation
– Segment table small
Disadvantages:
– Segments have variable lengths, so fitting them into memory is harder.
– Segments can be large, which leads to external fragmentation.
Segmentation
• Pages Vs. Segments:
– Pages:
Fixed units of a computer memory allocated by the
computer for storing and processing information.
Easy to place in memory (fixed size).
– Segments:
Variable units of a computer memory.
Because segments have different sizes, memory allocation takes more processing.
Paging with Segmentation
• Paging with Segmentation is a memory-management scheme that
attempts to take the advantages of both paging and
segmentation memory-management schemes.
• The segment-table entry contains not the base address of the
segment, but rather the base address of a page table for this
segment.
Computer Science Department
Faculty of Computers and Information Sciences
Mansoura University
VIRTUAL MEMORY
Presented By:
Dr. Abdelaziz Said
Agenda
Paging.
Demand Paging.
Page Fault.
Page Replacement.
Page Replacement Algorithms.
Virtual Memory
• Virtual memory allows the execution of processes that may not be completely in memory.
• Virtual memory: separation of user logical memory from physical
memory.
• Benefits
– Only part of the program needs to be in memory for execution.
– Logical address space can therefore be much larger than physical address
space.
– Allows address spaces to be shared by several processes.
– Allows for more efficient process creation.
– Programming becomes much easier (it frees programmers from concern about memory-storage limitations).
• Virtual memory can be implemented via demand paging.
Demand Paging
• A demand-paging system is similar to a paging system with swapping.
• Demand paging brings a page into memory only when it is needed.
• Advantages:
– Less I/O needed.
– Less memory needed.
– Faster response.
– More users.
Demand Paging
• When a process is to be executed, demand paging uses a lazy swapper (called the pager) that swaps into memory only those pages that will be needed.
Demand Paging
• How to distinguish between pages in memory & pages on the
disk?
– Use a valid-invalid bit in the page table.
– When the bit is set to valid, the page is legal and in memory.
– When the bit is set to invalid, either:
• the page is not valid (it does not exist in the logical address space of the program), or
• the page is valid but not yet in memory (it must be brought in).
Page Fault
• Steps for handling page fault trap
1. First reference to a page always causes a page fault trap to OS.
2. OS looks at the internal table of the process (kept within PCB):
If invalid reference, then abort the process.
If the reference is valid, then the page is just not in memory.
3. Get empty frame.
4. Swap page into frame.
5. Update the internal table and the page table (validation bit = 1).
6. Restart instruction.
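The steps above can be sketched as follows (frame allocation and the set of legal pages are simplified stand-ins; no replacement is modelled here):

```python
def access_page(page, page_table, valid, free_frames, process_pages):
    """Service one memory reference, handling a page fault if needed."""
    if page not in process_pages:
        raise MemoryError("abort: invalid reference")    # step 2: illegal page
    if not valid.get(page, False):                       # page fault trap (step 1)
        frame = free_frames.pop()                        # step 3: get an empty frame
        page_table[page] = frame                         # step 4: swap the page in
        valid[page] = True                               # step 5: validation bit = 1
    return page_table[page]                              # step 6: restart instruction

page_table, valid = {}, {}
free_frames = [0, 1, 2]
process_pages = {10, 11}                                 # pages in the logical space
print(access_page(10, page_table, valid, free_frames, process_pages))  # fault serviced
print(access_page(10, page_table, valid, free_frames, process_pages))  # no fault now
```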
Page Replacement
• What happens if there is no free frame?
Page replacement: find some page in memory that is not really in use and swap it out.
Page Replacement
Basic Page Replacement
• Find the location of the desired page on disk.
• Find a free frame:
– If there is a free frame, use it.
– If no free frame, use a page replacement algorithm to select
a victim frame.
• Read the desired page into the (newly) free frame. Update the
page and frame tables.
• Restart the process.
Page Replacement Algorithms
1- First-In-First-Out (FIFO).
2- Optimal Page Replacement.
3- Least Recently Used (LRU).
4- LRU-Approximation Page Replacement.
5- Counting-Based Page Replacement.
Page Replacement Algorithms
First-In-First-Out (FIFO):
• When pages must be replaced, the oldest page is
chosen.
• We create a FIFO queue to hold all pages in memory.
• We replace the page at the head of the queue.
• When page is brought into memory, we insert it at the
tail of queue.
• Advantages:
Low-overhead implementation
• Disadvantages:
May replace heavily used pages
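The queue-based description translates into a short fault counter (the reference string below is a common textbook example, not from these slides):

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    in_memory, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in in_memory:
            faults += 1
            if len(in_memory) == num_frames:
                in_memory.discard(queue.popleft())   # replace page at the queue head
            in_memory.add(page)
            queue.append(page)                       # insert the new page at the tail
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15
```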
Page Replacement Algorithms
Optimal Page Replacement:
• Replace the page that will not be used for the longest period of time.
• Advantages:
Optimal solution; can be used as an off-line analysis method.
• Disadvantages:
No online implementation (it requires future knowledge of the reference string).
Page Replacement Algorithms
Least Recently Used (LRU):
• Pages that have been heavily used in the last few instructions will
probably be heavily used again in the next few.
• Throw out the page that has been unused for a long period of
time.
• LRU can be implemented by Counters and stacks.
• Disadvantages:
Expensive: a list of pages ordered by reference must be maintained and updated on every reference.
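LRU can be sketched with an ordered dictionary standing in for the counter or stack hardware (the reference string is a common textbook example; three frames give 12 faults):

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement (stack formulation)."""
    frames = OrderedDict()                        # ordered least -> most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)              # page moves to the top of the stack
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)        # evict the least recently used page
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12
```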
Page Replacement Algorithms
Least Recently Used (LRU):
Example (figure omitted): tracing LRU over the reference string gives 12 page faults.
Page Replacement Algorithms
LRU-Approximation Page Replacement:
• Reference bits are associated with each entry in the page table.
• When page is referenced bit set to 1.
• Replace the one which is 0 (if one exists).
• We do not know the order of use.
Page Replacement Algorithms
Additional-Reference-Bits Algorithm:
• We can gain additional ordering information by recording the reference bits at regular intervals.
• We keep an 8-bit byte for each page in a table in memory; at each interval, the byte is shifted right and the reference bit is copied into its high-order bit.
• Ex:
A page with a history value of 11000100 has been used more recently than one with a value of 01110111.
If the reference bits contain 00000000, the page has not been used for eight time periods.
A page that is used at least once in each period has a value of 11111111.
• Problem:
The numbers are not guaranteed to be unique.
• Solution:
Swap out all pages with the smallest value, or use FIFO to choose among them.
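The periodic shifting can be sketched as follows (the reference-bit samples are made up for illustration):

```python
def age(history, referenced):
    """At each interval: shift the 8-bit history right, new bit enters at the high end."""
    history >>= 1
    if referenced:
        history |= 0b10000000
    return history & 0xFF

h = 0
for bit in (1, 1, 0, 0, 0, 1, 0, 0):   # reference bit sampled at 8 intervals
    h = age(h, bit)
print(format(h, '08b'))                # 00100011

# Comparing histories as numbers gives the ordering from the slide:
assert 0b11000100 > 0b01110111         # used more recently -> larger value
```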
Page Replacement Algorithms
Second-Chance Algorithm (Clock algorithm):
• Based on FIFO replacement algorithm.
• When a page has been selected, we inspect its reference bit:
If the value is 0, we replace this page.
If the reference bit is 1, we give the page a second chance and move on to select the next FIFO page.
• When a page gets a second chance, its reference bit is cleared and its arrival time is reset to the current time.
• Can implement second-chance algorithm using a circular queue.
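A sketch of the circular-queue (clock) formulation (the page names and bits are illustrative):

```python
def second_chance_victim(pages, ref_bit, hand):
    """Advance the clock hand until a page with reference bit 0 is found."""
    while True:
        page = pages[hand]
        if ref_bit[page]:
            ref_bit[page] = 0                 # second chance: clear the bit
            hand = (hand + 1) % len(pages)    # move on to the next FIFO page
        else:
            return page, hand                 # victim and new hand position

pages = ['A', 'B', 'C', 'D']                  # circular queue of resident pages
ref_bit = {'A': 1, 'B': 0, 'C': 1, 'D': 0}
victim, hand = second_chance_victim(pages, ref_bit, 0)
print(victim)   # B  (A had bit 1, so it was given a second chance)
```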
Page Replacement Algorithms
Enhanced Second-Chance Algorithm:
• We can enhance the second-chance algorithm by considering the
reference bit and the modify bit.
STORAGE MANAGEMENT
Presented By:
Dr. Abdelaziz Said
Agenda
Introduction.
Mass-Storage Structure.
Disk Scheduling.
Introduction
• We have three essential requirements for long-term information
storage:
It must be possible to store a very large amount of
information.
The information must survive the termination of the process using it.
Multiple processes must be able to access the information
concurrently.
Introduction
• The usual solution to all these problems is to store information
on disks and other external media in units called files.
Mass-storage Structure
Magnetic Disks:
• Magnetic disks provide the bulk of secondary storage for modern
computer systems.
Mass-storage Structure
Magnetic Disks:
• Disk speed has two parts:
1. The transfer rate is the rate at which data flow between the
drive and the computer.
2. The positioning time, or random-access time, consists of two parts:
seek time: the time necessary to move the disk arm to the desired cylinder.
rotational latency: the time necessary for the desired sector to rotate under the disk head.
• Bandwidth: the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
Mass-storage Structure
Magnetic Disks:
• Head crash: occurs when the head makes contact with the disk surface, damaging the magnetic surface.
A head crash normally cannot be repaired; the entire disk
must be replaced.
Mass-storage Structure
Magnetic Tapes:
• A tape is kept in a spool and is wound or rewound past a read–
write head.
• Magnetic tape was used as an early secondary-storage medium.
Its access time is slow compared with that of main memory and
magnetic disk.
Its random access is about a thousand times slower than random
access to magnetic disk.
Disk Scheduling
First Come First Served (FCFS):
• Example: Given the following queue 95, 180, 34, 119, 11, 123, 62, 64
with read write head initially at 50 and the tail being at 199.
Disk Scheduling
SCAN (Elevator):
• The head scans toward the nearest end; when it hits the end, it reverses and services the requests it did not get on the way.
• Prevents starvation.
• Bounded time for each request (short service times).
• Disadvantage: requests at the other end may take a while.
• Example: Given the following queue 95, 180, 34, 119, 11, 123, 62, 64
with read write head initially at 50 and the tail being at 199.
LOOK:
• The arm goes only as far as the final request in each direction, then reverses without travelling all the way to the end of the disk.
• Like SCAN, but it stops moving when there are no more requests in the current direction.
Disk Scheduling
C-LOOK:
• Enhanced version of C-SCAN.
• Scanning does not go past the last request in the current direction.
• The head moves while servicing requests until there are no more in that direction, then it jumps to the outermost outstanding request on the other end.
• Example: Given the following queue 95, 180, 34, 119, 11, 123, 62, 64 with
read write head initially at 50 and the tail being at 199.
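Total head movement for the example can be computed by summing the seek distances (a sketch; in this simple model C-LOOK's jump back to the lowest request is counted as movement):

```python
def total_head_movement(start, service_order):
    """Sum of cylinder distances when requests are serviced in the given order."""
    movement, position = 0, start
    for cylinder in service_order:
        movement += abs(cylinder - position)
        position = cylinder
    return movement

queue = [95, 180, 34, 119, 11, 123, 62, 64]
head = 50

print(total_head_movement(head, queue))        # FCFS order: 644 cylinders

# C-LOOK moving upward first, then jumping to the lowest outstanding request:
c_look_order = sorted(c for c in queue if c >= head) + sorted(c for c in queue if c < head)
print(total_head_movement(head, c_look_order)) # 322 cylinders
```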
RAID Structure
File Concept
File Attributes:
• A file’s attributes vary from one operating system to another but
typically consist of these: Name – Identifier – Type – Location – Size – Protection – Time, date, and user identification.
• Information about files is kept in the directory structure, which is
maintained on the disk.
File Operations:
• To define a file properly, we need to consider the operations that can
be performed on files.
• The operating system can provide system calls to open, close, create,
write, read, reposition, delete, and truncate files.
File Concept
File Types:
• If an operating system recognizes the type of a file, it can then operate on the file in reasonable ways.
Access Methods
Sequential Access.
• The simplest access method is sequential access. Information in the file
is processed in order, one record after the other.
• This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.
• Sequential access is based on a tape model of a file.
Access Methods
Direct Access.
• The second method is direct access (also called relative access).
• Here, a file is made up of fixed-length logical records that allow
programs to read and write records rapidly in no particular order.
• The direct-access method is based on a disk model of a file.
File-system Implementation
• File-system implementation addresses how files and directories are stored, how disk space is managed, and how to make everything work efficiently and reliably.
• Sector 0 of the disk is called the MBR (Master Boot Record) and is
used to boot the computer.
• When the computer is booted, the BIOS reads in and executes the MBR.
File Allocation Methods
Contiguous Allocation:
• Requires that each file occupy a set of contiguous blocks on the disk.
• Accessing block b+1 after block b normally requires no head
movement.
• When head movement is needed, the head need only move from one
track to the next.
• The directory entry for each file indicates the address of the starting
block and the length of the area allocated for this file.
File Allocation Methods
Contiguous Allocation:
• Advantages:
– The number of disk seeks is minimal.
– Simple, require start block & length.
– Support both sequential access (read next block) & direct access (start
block + i).
• Disadvantages:
– Finding space for new file (dynamic storage allocation).
– External fragmentation: free space is broken into chunks, each too small to hold a file.
Solution: compact all free space into one contiguous block.
– Determining how much space is needed for a file is difficult.
– A file cannot grow.
File Allocation Methods
Linked Allocation (FAT variation):
• Disadvantages:
– The entire table must be in memory all the time to make it work.
– The FAT idea does not scale well to large disks.
File Allocation Methods
Indexed Allocation:
• Bring all the pointers together into one location called the index block.
• Each file has its own index block.
• The directory contains the address of the index block of a file.
• Advantages:
– Supports direct access.
– Does not suffer from external fragmentation.
• Disadvantages:
– The index block occupies some space and is thus an overhead of the method.
Free Space Management
• Creating a file requires searching the free-space list for the required amount of space; this space is then removed from the list.
Free Space Management
• Implementation of Free Space List:
1. Bit Vector
• 1 bit represents one block.
• Free block: bit = 1; allocated block: bit = 0.
• EX: disk where only blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25,
26, and 27 are free .
Bit vector is: 001111001111110001100000011100000 ...etc.
• Advantages:
– Simplicity
– Efficiency
– Easy to get contiguous space
• Disadvantage
– Bit vector requires extra space
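The bit-vector example can be reproduced directly (free = 1, allocated = 0, as on the slide; the disk size of 32 blocks is assumed for the sketch):

```python
free_blocks = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}
num_blocks = 32                                   # assumed disk size

# Build the bit vector: bit b is 1 if block b is free.
bit_vector = ''.join('1' if b in free_blocks else '0' for b in range(num_blocks))
print(bit_vector)                     # 00111100111111000110000001110000

def first_free_block(bits):
    """Index of the first 1 bit, i.e. the first free block."""
    return bits.index('1')

print(first_free_block(bit_vector))   # 2
```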
Free Space Management
• Implementation of Free Space List:
2. Linked list
• Link all the free blocks together.
• A pointer to the first free block is cached in memory.
• Each free block contains a pointer to next free block.
• Advantage: No waste of space.
• Disadvantage: Cannot get contiguous space easily.
3. Grouping
• Store the addresses of n free blocks in the first free block.
• The first n-1 of these blocks are actually free.
• The last block contains the addresses of another n free blocks.
• Advantage: the addresses of a large number of free blocks can be found quickly.