Lecture 07 - Memory Management
Objectives
• Describe ways of organizing memory hardware
• Discuss various memory-management techniques, including paging and segmentation
• Description of the Intel Pentium, which supports both pure segmentation and segmentation with paging

Outline
• Background
– Address space
– Logical address space
– MMU
• Contiguous Memory Allocation
• Paging
– Structure of the Page Table
• Segmentation
• Example: The Intel Pentium
2
Background
• Program must be brought (from disk) into memory and placed within a process for it to be run
• Main memory and registers are the only storage the CPU can access directly
• Register access in one CPU clock (or less)
• Main memory can take many cycles
• Cache sits between main memory and CPU registers
• Protection of memory is required to ensure correct operation
3
Background
[Figure: the CPU, with its registers and cache, accesses main memory, which holds the operating system and the process's program image; the program is brought in from disk.]
4
Multiple programs
5
Protection: Base and Limit Registers
A pair of base and limit registers defines the physical address space of a process.
A process should access only addresses within that range; protection can be provided in this way.
6
Protection
7
Binding of Instructions and Data to Memory
8
Multistep Processing of a User Program
9
Program addresses and memory
• In very early systems, program addresses are just physical memory addresses.
– A program has to be loaded at that address to run.
– No relocation
[Figure: a program image containing main, a function (func), and a variable.]
10
Program addresses and memory
Assume program addresses are physical addresses.
[Figure: a program whose instructions (Jump 8, Mov, Add at addresses 0–12) is loaded into RAM at physical address 0, so program addresses and RAM addresses coincide.]
11
Program addresses and memory
[Figure: Program 1 occupies physical addresses 0–12 and Program 2 is loaded above it at addresses 16–28; Program 2's absolute addresses (e.g., Jump 12) now point into Program 1, showing why programs cannot assume fixed physical addresses.]
12
Logical address space concept
13
Logical vs. Physical Address Space
• The concept of a logical address space that is bound to a separate physical address space is
central to proper memory management
• Logical and physical addresses are the same in compile-time and load-time address-binding
schemes;
• Logical (virtual) and physical addresses differ in execution-time address-binding scheme
14
Logical and physical addresses
[Figure: a relocatable program uses logical addresses 00–28 (mov, jmp 16, add r1,r2,r3, mov r2,M[24], mov r1,M[28], cmp, int y, int x). It is loaded into RAM with base 24 and limit 32; when the CPU executes mov r1, M[28], the logical address 28 is translated to the physical address 28 + 24 = 52.]
15
Memory-Management Unit (MMU)
• In MMU scheme, the value in the relocation register (i.e., base register) is added to every
address generated by a user process at the time it is sent to memory
• The user program deals with logical addresses; it never sees the real physical addresses
16
Dynamic relocation using a relocation register
17
Dynamic Loading
18
Dynamic Linking
19
Swapping
• A process can be swapped temporarily out of memory to a backing store, and then brought back
into memory for continued execution
• Backing store – fast disk large enough to accommodate copies of all memory images for all
users; must provide direct access to these memory images
• Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-priority
process is swapped out so higher-priority process can be loaded and executed
• Major part of swap time is transfer time; total transfer time is directly proportional to the amount
of memory swapped
• Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows)
• System maintains a ready queue of ready-to-run processes which have memory images on
disk
20
Schematic View of Swapping
21
Contiguous Memory Allocation
(Dynamic Memory Allocation Problem)
Adapted from the textbook
Operating System Concepts – 10th Edition
• Relocation registers used to protect user processes from each other, and from changing
operating-system code and data
– Base register contains value of smallest physical address
– Limit register contains range of logical addresses – each logical address must be less than
the limit register
– MMU maps logical addresses dynamically
23
Basic Memory Allocation Strategies
• In this lecture, we will cover 3 basic strategies for allocating main memory to processes
– 1) Contiguous allocation
– 2) Paging
– 3) Segmentation
24
Hardware Support for Relocation and Limit Registers
25
Contiguous Allocation
• Multiple-partition allocation
– Hole – block of available memory; holes of various size are scattered throughout memory
– When a process arrives, it is allocated memory from a hole large enough to accommodate it
– Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
[Figure: successive memory snapshots showing the OS, partitions allocated to processes (e.g., process 8, process 10), and holes.]
26
Dynamic Storage-Allocation Problem
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
27
Dynamic Storage-Allocation Problem
• Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750 KB, and 125 KB (in order), how would the first-fit algorithm place processes of size 115 KB, 500 KB, 358 KB, 200 KB, and 375 KB (in order)?
– 115 KB is put in 300-KB partition, leaving 185 KB, 600 KB, 350 KB, 200 KB, 750 KB, 125 KB
– 500 KB is put in 600-KB partition, leaving 185 KB, 100 KB, 350 KB, 200 KB, 750 KB, 125 KB
– 358 KB is put in 750-KB partition, leaving 185 KB, 100 KB, 350 KB, 200 KB, 392 KB, 125 KB
– 200 KB is put in 350-KB partition, leaving 185 KB, 100 KB, 150 KB, 200 KB, 392 KB, 125 KB
– 375 KB is put in 392-KB partition, leaving 185 KB, 100 KB, 150 KB, 200 KB, 17 KB, 125 KB
• Best-fit?
• Worst-fit?
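As a rough sketch of the first-fit walkthrough above (plain C; the hole array simply holds the remaining partition sizes, and best-fit or worst-fit would differ only in picking the smallest or largest adequate hole instead of the first one):

#include <stdio.h>

/* First-fit sketch over the partition sizes from the example above.
 * Each request scans the holes from the start and is carved out of the
 * first one that is large enough (all sizes in KB). */
int main(void)
{
    int hole[]    = { 300, 600, 350, 200, 750, 125 };
    int request[] = { 115, 500, 358, 200, 375 };
    int nh = sizeof hole / sizeof hole[0];
    int nr = sizeof request / sizeof request[0];

    for (int r = 0; r < nr; r++) {
        int placed = 0;
        for (int h = 0; h < nh && !placed; h++) {
            if (hole[h] >= request[r]) {        /* first hole big enough */
                hole[h] -= request[r];
                printf("%3d KB -> hole %d (now %d KB left)\n",
                       request[r], h, hole[h]);
                placed = 1;
            }
        }
        if (!placed)
            printf("%3d KB -> must wait (no hole large enough)\n", request[r]);
    }
    return 0;
}

Running this reproduces the placements listed above; the 375-KB process ends up in the 392-KB remainder of the original 750-KB partition.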
28
Fragmentation
• External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous
• Internal Fragmentation – allocated memory may be slightly larger than requested memory; this
size difference is memory internal to a partition (allocation), but not being used
• Reduce external fragmentation by compaction
– Shuffle memory contents to place all free memory together in one large block
– Compaction is possible only if relocation is dynamic, and is done at execution time
– I/O problem
• Latch job in memory while it is involved in I/O
• Do I/O only into OS buffers
29
Segmentation
32
Logical View of Segmentation
[Figure: a program's segments (1–4) in the logical address space are placed at separate, non-contiguous locations in physical memory.]
33
Segmentation Architecture
• Segment table – maps two-dimensional logical addresses; each table entry has:
– base – contains the starting physical address where the segments reside in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment table’s location in memory
• Segment-table length register (STLR) indicates number of segments used by a program;
segment number s is legal if s < STLR
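A sketch of this lookup in C (table contents and names are illustrative): the segment number must be below STLR, the offset below the segment's limit, and the physical address is base + offset.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative segment-table entry: base and limit, as described above. */
typedef struct { uint32_t base; uint32_t limit; } seg_entry_t;

uint32_t seg_translate(const seg_entry_t *seg_table, uint32_t stlr,
                       uint32_t s, uint32_t d)
{
    if (s >= stlr) {                       /* segment number not legal      */
        fprintf(stderr, "trap: invalid segment %u\n", s);
        exit(EXIT_FAILURE);
    }
    if (d >= seg_table[s].limit) {         /* offset beyond segment length  */
        fprintf(stderr, "trap: offset %u beyond limit\n", d);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + d;          /* physical address              */
}

int main(void)
{
    /* made-up segment table: segment 2 has base 4300 and limit 400 */
    seg_entry_t table[] = { {1400, 1000}, {6300, 400}, {4300, 400} };
    printf("%u\n", seg_translate(table, 3, 2, 53));   /* segment 2, offset 53 -> 4353 */
    return 0;
}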
34
Segmentation Hardware
35
Segmentation Architecture
• Protection
– With each entry in segment table associate:
• validation bit = 0 ⇒ illegal segment
• read/write/execute privileges
• Protection bits associated with segments; code sharing occurs at segment level
– Code segment: READONLY; sharable; …
– Data segment: READ-WRITE; not sharable
36
Example of Segmentation
37
Paging
39
Paging
[Figure: a program's logical address space and RAM (physical memory), both starting at address 0.]
40
Paging
[Figure: the program is divided into fixed-size pages 0–5, which are loaded into arbitrary free frames of RAM; the page table records the mapping 0→1, 1→4, 2→2, 3→7, 4→9, 5→6.]
41
Example
42
Address Translation Scheme
– Assume logical addresses are m bits. Then the logical address space is 2^m bytes.
– Assume the page size is 2^n bytes.
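The page number is then the high m − n bits and the offset the low n bits; as a small sketch (the values are chosen for illustration), the split is a shift and a mask:

#include <stdint.h>
#include <stdio.h>

/* Sketch of the split described above: with a page size of 2^n bytes,
 * the low n bits of an m-bit logical address are the offset and the
 * remaining high m-n bits are the page number. */
int main(void)
{
    const unsigned n = 12;                  /* illustrative: 4 KB pages   */
    uint32_t logical = 0x2004;              /* an m = 16 bit address      */

    uint32_t page   = logical >> n;                 /* high m-n bits */
    uint32_t offset = logical & ((1u << n) - 1);    /* low n bits    */

    printf("logical %#x -> page %u, offset %u\n", logical, page, offset);
    return 0;
}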
43
Simple example
Assume m is 3 and n is 2.
The logical address space has 2^3 = 8 addresses (000–111) and each page holds 2^2 = 4 bytes:
– addresses 000–011 form page 0
– addresses 100–111 form page 1
The high m − n = 1 bit is the page number; the low n = 2 bits are the offset.
44
Paging Hardware:
address translation
45
Paging and Address Translation Example
Page size = 4 bytes (2^2); 32-byte physical memory. The low 2 bits of a logical address are the offset (displacement) inside the page; the remaining bits are the page number, which the page table maps to a frame number.
– LA = 5: 5 is 0101 → page 01, offset 01 → PA = 11001
– LA = 11: 11 is 1011 → page 10, offset 11 → PA = 00111
– LA = 13: 13 is 1101 → page 11, offset 01 → PA = 01001
46
Address translation example 2
A 16-bit logical address 0010 000000000100 is translated with a page size of 4096 bytes, so the offset is the low 12 bits and the page number p is the high 4 bits: p = 0010 = 2, offset = 000000000100.
The page table (frame number, valid/invalid bit) is:
page:    0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
frame:  010  001  110  000  100  011  000  000  000  101  000  111  000  000  000  000
v/i:      1    1    1    1    1    1    0    0    0    1    0    1    0    0    0    0
Entry 2 is valid and holds frame 110, so the 15-bit physical address is 110 000000000100.
47
Address translation example 3
The OS keeps information about the frames in its frame table.
[Figure: the currently running process is process 1 (P1); PCB1 and PCB2 are the process control blocks of the two processes.]
50
TLB
• The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB)
51
TLB Associative Memory
[Figure: the TLB is an associative memory of (page #, frame #) pairs that is searched in parallel.]
52
Paging Hardware With TLB
53
Effective Memory Access Time
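One common way to quantify the cost of TLB misses (the timings below are assumptions for illustration, not values from the slides): with hit ratio α, a hit costs one TLB lookup plus one memory access, while a miss additionally needs a page-table access in memory, so EAT = α(t_TLB + t_mem) + (1 − α)(t_TLB + 2·t_mem). A tiny calculation:

#include <stdio.h>

/* Illustrative calculation: on a TLB hit one memory access is needed;
 * on a miss the page table in memory must be read first, costing a
 * second access. All numbers are assumptions. */
int main(void)
{
    double hit_ratio = 0.80;     /* fraction of references found in the TLB */
    double mem_ns    = 100.0;    /* one main-memory access                  */
    double tlb_ns    = 10.0;     /* one TLB lookup                          */

    double eat = hit_ratio       * (tlb_ns + mem_ns)
               + (1 - hit_ratio) * (tlb_ns + 2 * mem_ns);

    printf("effective access time = %.1f ns\n", eat);   /* 130.0 ns here */
    return 0;
}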
54
Memory Protection
55
Valid (v) or Invalid (i) Bit In A Page Table
56
Page Table Entry Structure
• A typical page table entry is 32 bits; the exact size and layout depend on the architecture
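As a purely hypothetical layout (field names and widths are our assumptions, not any particular architecture's format), a 32-bit entry might pack a frame number together with valid, protection, dirty and reference bits:

#include <stdint.h>

/* Hypothetical 32-bit page table entry; the exact bit layout chosen by
 * a compiler is implementation-defined, so this is only a sketch. */
typedef struct {
    uint32_t frame    : 20;  /* frame number (20 bits -> up to 2^20 frames) */
    uint32_t valid    : 1;   /* valid/invalid bit                           */
    uint32_t read     : 1;   /* protection bits                             */
    uint32_t write    : 1;
    uint32_t execute  : 1;
    uint32_t dirty    : 1;   /* page has been modified                      */
    uint32_t accessed : 1;   /* page has been referenced                    */
    uint32_t unused   : 6;   /* remaining bits                              */
} pte_t;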
57
Shared Pages
• A program can be started multiple times (the same program executed by multiple processes)
– Code pages can be shared
– Shared code
• One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems).
• Shared code must appear in same location in the logical address space of all processes
58
Shared Pages Example
59
Structure of the Page Table
• Hierarchical Paging
61
Hierarchical Page Tables
• Break up the logical address space into subspaces; multiple page tables
– Breaking the page table into sub-tables (paging the page table)
• A simple technique is a two-level page table
[Figure: the logical address space (Log Mem) is divided into parts, each mapped through its own page table (PT).]
62
Two-Level Paging Scheme
63
Two-Level Paging Scheme
[Figure: a single-level page table with entries 0000–1111 compared with a two-level scheme in which an outer page table with entries 00–11 points to smaller inner page tables, each with entries 00–11; the offset part of the logical address is unchanged.]
64
Address-Translation Scheme
• Assume:
– 32-bit virtual (logical) addresses are used
– 2-level paging
– Address split scheme: [10, 10, 12]
• 10 bits for page number part 1
• 10 bits for page number part 2
• 12 bits for offset
– Page size = 2^12 = 4096 bytes = 4 KB
– An outer (top-level) page table will have 2^10 = 1024 entries
– An inner (second-level) page table will also have 2^10 = 1024 entries
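A sketch of splitting such an address with shifts and masks (the sample address is arbitrary):

#include <stdint.h>
#include <stdio.h>

/* Sketch of the [10, 10, 12] split above: the top 10 bits index the
 * outer page table, the next 10 bits index the chosen inner page table,
 * and the low 12 bits are the offset within the 4 KB page. */
int main(void)
{
    uint32_t va = 0x00403004;                    /* an arbitrary 32-bit address */

    uint32_t p1     = (va >> 22) & 0x3FF;        /* outer-table index (10 bits) */
    uint32_t p2     = (va >> 12) & 0x3FF;        /* inner-table index (10 bits) */
    uint32_t offset =  va        & 0xFFF;        /* page offset (12 bits)       */

    printf("p1=%u p2=%u offset=%u\n", p1, p2, offset);
    return 0;
}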
66
Example: two level page table need
67
Example: two level page table need
69
Example: two level page table need
70
Three-level Paging Scheme
64 bit addresses
71
Hashed Page Tables
72
Hashed Page Table
[Figure: a hashed page table mapping a virtual page number to a frame number.]
73
Inverted Page Table
• Entry consists of the page number of the virtual page stored in that real memory location
(frame), with information about the process that owns that page
– Entry content: <pid, virtual page number>
• Decreases memory needed to store each page table, but increases time needed to search the
table when a page reference occurs
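A minimal sketch of such a lookup (structure and names are illustrative): the table has one entry per physical frame, and the index of the matching <pid, page> entry is the frame number.

#include <stddef.h>
#include <stdint.h>

/* One entry per frame, holding <pid, virtual page number>. */
typedef struct { int pid; uint32_t vpn; } ipt_entry_t;

/* Returns the frame number, or -1 if the page is not in memory. */
long ipt_lookup(const ipt_entry_t *ipt, size_t nframes, int pid, uint32_t vpn)
{
    for (size_t f = 0; f < nframes; f++)              /* linear search over  */
        if (ipt[f].pid == pid && ipt[f].vpn == vpn)   /* all frames          */
            return (long)f;
    return -1;                                        /* page fault          */
}

Because the search is linear over all frames, real designs speed it up, for example with the hashing scheme discussed above.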
74
Inverted Page Table Architecture
75
Intel x86 architecture
support for memory management
Adapted from the textbook
Operating System Concepts – 10th Edition
77
Logical to Physical Address Translation in IA32
[Figure: a logical address (segment s, offset d) first passes through the segmentation unit, which produces a linear address; the paging unit then translates the linear address into a physical address.]
79
Intel IA32 Segmentation
[Figure: the selected segment descriptor supplies a base; the linear address is base + offset.]
80
IA32 Paging Architecture
• For 4 MB pages, we can skip the inner page tables (secondary page tables). A top level page
table entry will point directly to a 4 MB page.
81
IA32 Paging Architecture
82
4 level paging
• The Intel 64-bit architecture supports four-level paging; Linux supports it as well.
• The Intel 64-bit architecture uses 48-bit virtual addresses. The address division scheme is [9, 9, 9, 9, 12].
[Figure: a 48-bit virtual address split into four 9-bit index fields and a 12-bit offset.]
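A sketch of decoding such an address (the sample value is arbitrary):

#include <stdint.h>
#include <stdio.h>

/* Sketch of the [9, 9, 9, 9, 12] split above for a 48-bit virtual address:
 * four 9-bit indices select entries at the four paging levels and the low
 * 12 bits are the offset within a 4 KB page. */
int main(void)
{
    uint64_t va  = 0x00007f1234567000ULL & ((1ULL << 48) - 1);  /* keep 48 bits */

    uint64_t off = va & 0xFFF;
    uint64_t i1  = (va >> 12) & 0x1FF;   /* innermost-level index */
    uint64_t i2  = (va >> 21) & 0x1FF;
    uint64_t i3  = (va >> 30) & 0x1FF;
    uint64_t i4  = (va >> 39) & 0x1FF;   /* top-level index       */

    printf("L4=%llu L3=%llu L2=%llu L1=%llu offset=%llu\n",
           (unsigned long long)i4, (unsigned long long)i3,
           (unsigned long long)i2, (unsigned long long)i1,
           (unsigned long long)off);
    return 0;
}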
83
Linux on IA32: Segmentation
• Linux is designed to run on different hardware platforms: Pentium, Arm, Motorola, Sparc, MIPS,
…
• Therefore it does not rely on segmentation and makes minimal use of segmentation on the Intel IA32 architecture.
84
Linux Paging: some kernel structures
struct task_struct {              /* the PCB object of a process X */
    …
    struct mm_struct *mm;         /* the mm object of process X (keeps memory-management-related information) */
    …
};

struct mm_struct {
    …
    pgd_t *pgd;                   /* top-level page table of process X (called the page global directory) */
    …
};
85
References
• The slides here are adapted/modified from the textbook and its slides: Operating System Concepts,
Silberschatz et al., 7th & 8th editions, Wiley.
• Operating System Concepts, 7th and 8th editions, Silberschatz et al., Wiley.
• Modern Operating Systems, Andrew S. Tanenbaum, 3rd edition, 2009.
86
End of Chapter