OS Unit - III
CHAPTER 3
MEMORY MANAGEMENT
INTRODUCTION
The operating system manages the resources of the computer, controls
application launches, and performs tasks such as data protection and system
administration. The resource that the operating system uses the most is memory.
Memory is a storage area on the computer that contains the instructions and data
that the computer uses to run the applications.
When the applications or the operating system need more memory than is
available on the computer, the system must swap part of the current contents of
memory out to make room for the contents being requested. In the same way,
different situations need different memory management techniques: some cases
call for paging, while others may require an on-disk cache. Ultimately, deciding
which memory management technique to use is a matter of optimizing performance
for the available hardware and software.
Memory management is allocating, freeing, and re-organizing memory in a
computer system to optimize the available memory or to make more memory
available. It keeps track of every memory location (if it is free or occupied).
Physical Address Space: Physical addresses are addresses that specify actual
(real) physical locations in memory. It is a real memory location where the data is
stored. Hardware, such as the CPU and memory controller, can directly access
corresponding memory locations with physical addresses, and translation or
mapping is not involved.
Physical addresses are a low-level form of addressing that reflects the hardware
architecture and the memory layout of a specific computer. The hardware uses
them directly to access memory locations and communicate with devices.
Logical Address Space: Logical addresses are the virtual addresses generated by
the CPU at run time. They do not exist as physical locations in memory; they act
as references through which the CPU accesses real memory locations.
Logical addresses are addresses used by software programs and operating
systems to simplify memory management and provide a more flexible and abstract
way of accessing memory or devices. Logical addresses are part of the virtual
memory abstraction, which allows programs to operate in a larger logical address
space than the available physical memory.
A logical address must be mapped or translated to its corresponding
physical addresses before the hardware can use it. This translation is typically
performed by a hardware component called the Memory Management Unit (MMU).
Using logical addresses allows for memory protection mechanisms, where different
programs or processes are isolated from each other's memory spaces, enhancing
security and stability.
Logical addresses are generally more portable and can be used across
different systems or architectures as long as the address translation mechanisms
are compatible.
The abstraction provided by logical addresses enables features like demand
paging, swapping, and shared memory, which are crucial for efficient memory
management and resource utilization.
The two kinds of addresses can be compared as follows:
• Representation: A physical address represents the actual physical location of
data in memory or devices, whereas a logical address is a virtual or symbolic
representation of memory locations used by software programs.
• Generation: A physical address is generated based on the hardware architecture
and memory configuration, whereas a logical address is generated by the CPU
while a program is running.
SWAPPING
Swapping in an OS is one of the schemes that achieves maximum CPU utilization
and good memory management by swapping processes into and out of the main
memory. Swap-in brings a process from the hard drive (secondary memory) into
RAM, and swap-out moves a process from RAM (main memory) back to the hard drive.
Suppose several processes such as P1, P2, P3, and P4 are ready to be
executed in the ready queue, and processes P1 and P2 are very memory-consuming.
When the processes start executing, there may be a scenario where memory is not
available for the execution of processes P3 and P4, as only a limited amount of
memory is available for process execution.
Swapping in the operating system is a memory management scheme that
temporarily swaps out an idle or blocked process from the main memory to
secondary memory which ensures proper memory utilization and memory
availability for those processes that are ready to be executed.
There are two important concepts in the process of swapping which are as follows:
1. Swap In
2. Swap Out
Swap In: Moving a process out of secondary memory (hard drive) and restoring it
to the main memory (RAM) for execution is known as the swap-in method.
Swap Out: Moving a process out of the main memory (RAM) and sending it to
secondary memory (hard drive), so that processes with higher priority or larger
memory requirements can be executed, is known as the swap-out method.
Advantages of swapping:
• With the help of swapping, the CPU can keep several processes in progress at the
same time, so that processes do not have to wait as long before execution.
• Swapping ensures proper RAM (main memory) utilization.
• Swapping creates a dedicated disk partition in the hard drive for swapped
processes which is called swap space.
• Swapping in OS is an economical process.
• Swapping method can be applied on priority-based process scheduling where
a high-priority process is swapped in and a low-priority process is swapped
out which improves the performance.
DYNAMIC LOADING
The process of getting a program from secondary storage (hard disk) to the
main memory (RAM) is known as loading. In simple words, loading loads the
program in the main memory.
The entire program and all data of a process must be in physical memory for
the process to execute. The size of a process is thus limited to the size of physical
memory. To obtain better memory-space utilization, we can use dynamic loading.
With dynamic loading, a routine is not loaded until it is called. All routines are kept
on disk in a relocatable load format. The main program is loaded into memory and
is executed. When a routine needs to call another routine, the calling routine first
checks to see whether the other routine has been loaded. If not, the relocatable
linking loader is called to load the desired routine into memory and to update the
program's address tables to reflect this change. Then control is passed to the newly
loaded routine.
The advantage of dynamic loading is that an unused routine is never loaded.
This method is particularly useful when large amounts of code are needed to
handle infrequently occurring cases, such as error routines. In this case, although
the total program size may be large, the portion that is used (and hence loaded)
may be much smaller.
Dynamic loading does not require special support from the operating system.
It is the responsibility of the users to design their programs to take advantage of
such a method. Operating systems may help the programmer, however, by
providing library routines to implement dynamic loading.
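On Unix-like systems, a similar on-demand effect can be obtained from user code
with the POSIX dlopen/dlsym routines. The sketch below only illustrates the
pattern: the library name libmathx.so and the routine name heavy_routine are
hypothetical placeholders, not part of any real library.

    #include <stdio.h>
    #include <dlfcn.h>   /* POSIX dynamic loading: dlopen, dlsym, dlclose */

    int main(void) {
        /* Hypothetical library and routine names, loaded only when needed. */
        void *handle = dlopen("libmathx.so", RTLD_LAZY);
        if (handle == NULL) {
            fprintf(stderr, "cannot load library: %s\n", dlerror());
            return 1;
        }

        /* Look up the routine's address inside the newly loaded library. */
        void (*heavy_routine)(void) = (void (*)(void)) dlsym(handle, "heavy_routine");
        if (heavy_routine != NULL)
            heavy_routine();      /* control passes to the loaded routine */

        dlclose(handle);          /* unload when no longer required */
        return 0;
    }

(On Linux such a program is linked with -ldl.)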
DYNAMIC LINKING
Linking is the process of combining all the required object modules of a
program into a single executable module so that execution can continue. With
static linking, the linker combines the object code produced by the compiler or
assembler into the executable at build time. With dynamic linking, this step is
postponed until run time: shared libraries are located and linked only when they
are actually needed, so one copy of a library can be shared by many running programs.
Advantages:
• Memory Efficiency: Programs don't need to include full copies of libraries,
leading to smaller executables and efficient use of memory.
• Upgradability: Libraries can be updated independently of the programs. A bug
fix or update to the library will affect all programs that use it without needing
recompilation.
Disadvantages:
• Runtime Overhead: Loading libraries and resolving symbols at runtime
introduces a slight performance cost.
• Dependency Issues: If a required shared library is missing or incompatible, the
program might fail to execute properly. This is often referred to as "DLL hell" in
Windows.
SINGLE PARTITION ALLOCATION
[Figure: Hardware address protection with relocation and limit registers: the
logical address A generated by the CPU is compared with the limit register L; if
A < L, the relocation register R is added to A to form the physical address,
otherwise an addressing error (trap) occurs.]
The limit register makes sure that the logical address generated by the CPU is
not bigger than the program's size.
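A minimal sketch of the check this hardware performs, assuming a relocation
register of 14000 and a limit register of 3000 (both values invented for
illustration):

    #include <stdio.h>

    /* Hypothetical register contents, for illustration only. */
    #define RELOCATION_REG 14000u   /* R: start of the program's partition */
    #define LIMIT_REG       3000u   /* L: size of the program              */

    /* Translate a logical address; *error is set if the address is illegal. */
    unsigned translate(unsigned logical, int *error) {
        if (logical < LIMIT_REG) {            /* is A < L ?                       */
            *error = 0;
            return logical + RELOCATION_REG;  /* R + A gives the physical address */
        }
        *error = 1;                           /* trap: error in addressing        */
        return 0;
    }

    int main(void) {
        int err;
        unsigned phys = translate(350, &err);
        if (!err)
            printf("logical 350 -> physical %u\n", phys);   /* prints 14350 */
        return 0;
    }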
MULTIPLE PARTITIONS
This is also known as a static partitioning scheme as shown in the following
Figure. A simple memory management scheme divides memory into n (possibly
unequal) fixed-sized partitions, each of which can hold exactly one process. The
degree of multiprogramming is dependent on the number of partitions. IBM used
this scheme in OS/MFT (Multiprogramming with a Fixed number of Tasks) on the
System/360. The partition boundaries are not movable (the system must be rebooted
to move a job between partitions). We can have one queue per partition or just a
single queue for all the partitions.
[Figure: Multiple partition system: memory from address 0 upward divided into
fixed partitions (Partition 1, Partition 2, Partition 3, ...), with multiple job
queues, one per partition.]
Initially, whole memory is available for user processes and is like a large
block of available memory. The operating system keeps details of available memory
blocks and occupied blocks in tabular form. OS also keeps track of the memory
requirements of each process. As processes enter the input queue, each process is
allocated space and loaded as soon as sufficient space is available for it. After
its execution is over, it releases its occupied space, and the OS fills this space with other
processes in the input queue. The block of available memory is known as a Hole.
Holes of various sizes are scattered throughout the memory. When any process
arrives, it is allocated memory from a hole that is large enough to accommodate it.
This example is shown in the Figure given below:
[Figure: Memory snapshots showing the OS in low memory (first 200K), Process A
(50K) loaded into Partition 1 (100K) leaving a 50K hole, and later snapshots in
which Process B and then Process D occupy other partitions, each leaving a hole.]
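A small sketch of how the OS might choose a hole for an arriving process using
the first-fit rule. The hole list below is invented for illustration, and a real
allocator would also split the chosen hole and merge neighbouring holes:

    #include <stdio.h>

    /* A hole is a free block of memory: starting address and size (in KB). */
    struct hole { unsigned start; unsigned size; };

    /* First fit: return the index of the first hole large enough, or -1. */
    int first_fit(struct hole holes[], int n, unsigned request) {
        for (int i = 0; i < n; i++)
            if (holes[i].size >= request)
                return i;
        return -1;
    }

    int main(void) {
        struct hole holes[] = { {200, 50}, {400, 150}, {700, 300} };  /* example holes */
        int i = first_fit(holes, 3, 120);          /* arriving process needs 120 KB */
        if (i >= 0)
            printf("allocate at %u KB (hole of %u KB)\n",
                   holes[i].start, holes[i].size); /* allocates at 400 KB */
        return 0;
    }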
Within a partition, if two holes are adjacent then they can be merged to
make a single large hole. However, this scheme suffers from a fragmentation
problem. Storage fragmentation occurs either because the user processes do not
completely accommodate the allotted partition or partition remains unused, if it is
too small to hold any process from the input queue. Main memory utilization is
extremely inefficient. Any program, no matter how small, occupies the entire
partition. In our example, process B takes 150K of partition 2 (200K in size). We are
left with a 50K-sized hole. This phenomenon, in which there is wasted space
internal to a partition, is known as internal fragmentation. It occurs because the
initial process is loaded in a partition that is large enough to hold it (i.e., allocated
memory may be slightly larger than requested memory). “Internal” here means
memory that is internal to a partition, but is not in use.
[Figure: Memory snapshots of a partition as Process B terminates and a new
process arrives in its place.]
COMPACTION
Compaction is a memory management technique in which the free space of a
running system is gathered together (compacted) to reduce fragmentation problems
and improve memory allocation efficiency. Compaction is used by many modern
operating systems, such as Windows, Linux, and Mac OS X. As in the figure, we
have some used memory (black) and some unused memory (white): the used blocks
are brought together and all the empty spaces are combined into one. This process
is called compaction. It is done to prevent and solve the problem of fragmentation,
but it requires a great deal of CPU time.
After compaction, all the holes are contiguous: the OS moves all the loaded
processes in the different partitions together, and the merged hole can then
accommodate new processes according to their needs. This method is also known as
de-fragmentation. Let us explain it through the diagram.
At the time of compaction, the CPU stops the execution of the current
process, because the process will resume from a different physical location after
compaction. If the CPU did not stop the execution of the process, it might execute
instructions from the wrong locations instead of the next instruction of the same
process in memory.
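The idea can be sketched as sliding every loaded block toward one end of memory
so that all the holes merge into a single large hole. The block list below is
illustrative only; a real OS would also copy the memory contents and update each
process's relocation register:

    #include <stdio.h>

    struct block { const char *name; unsigned start; unsigned size; };

    /* Slide all allocated blocks down to low memory, leaving one big hole at the top. */
    void compact(struct block b[], int n) {
        unsigned next = 0;                 /* next free physical address            */
        for (int i = 0; i < n; i++) {
            b[i].start = next;             /* move (copy) the block down            */
            next += b[i].size;             /* relocation registers updated here too */
        }
    }

    int main(void) {
        struct block mem[] = { {"A", 0, 50}, {"B", 120, 150}, {"C", 400, 80} };
        compact(mem, 3);
        for (int i = 0; i < 3; i++)
            printf("%s -> start %u, size %u\n", mem[i].name, mem[i].start, mem[i].size);
        return 0;   /* A at 0, B at 50, C at 200; the single hole begins at 280 */
    }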
Advantages of Compaction
• Reduces external fragmentation.
• Makes memory usage efficient.
• Memory becomes contiguous.
• Since memory becomes contiguous more processes can be loaded to
memory, thereby increasing the scalability of OS.
• Fragmentation of the file system can be temporarily removed by compaction.
• Improves memory utilization as there is less gap between memory blocks.
Disadvantages of Compaction
• System efficiency is reduced and latency is increased.
• A huge amount of time is wasted in performing compaction.
• CPU sits idle for a long time.
• Not always easy to perform compaction.
• It may cause deadlocks since it disturbs the memory allocation process.
RELOCATION
Relocation is the process of mapping the logical addresses used by a program onto
the actual physical addresses assigned to it when it is loaded into a partition.
With a relocation (base) register, the hardware adds the register's contents to
every logical address generated by the CPU, so a program can be loaded into, or
moved to, any partition without changing its code, as illustrated earlier with the
relocation and limit registers.
PAGING
Paging is a memory management technique in which processes are brought from
secondary storage into main memory in fixed-size blocks called pages, which are
placed into fixed-size blocks of main memory called frames.
When a program needs to access data, it sends a request to the operating
system, which brings the required process from secondary memory into the main
memory. Each process is divided into small fixed-sized chunks called pages;
similarly, the main memory is divided into equal fixed-sized pieces called frames.
The pages of a process may be stored at different locations in the main memory.
The important point is that the size of a page and the size of a frame are the same.
For example, if every page in secondary memory is 2 KB, then every frame in
main memory is also 2 KB.
The problem is that physical memory is finite. When all of the spaces in
physical memory are filled with requests, the operating system has to start
swapping the processes that are not in use to make room for new ones. This
process is called swapping.
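A minimal sketch of how a logical address is split into a page number and an
offset and then mapped through a page table. The 2 KB page size matches the
example above, while the page-table contents are invented:

    #include <stdio.h>

    #define PAGE_SIZE 2048u                 /* 2 KB pages and frames, as in the example */

    /* Hypothetical page table: page_table[p] holds the frame number for page p. */
    unsigned page_table[] = { 5, 2, 7, 0 };

    unsigned translate(unsigned logical) {
        unsigned page   = logical / PAGE_SIZE;   /* page number = index into the table   */
        unsigned offset = logical % PAGE_SIZE;   /* offset within the page               */
        unsigned frame  = page_table[page];      /* frame that currently holds the page  */
        return frame * PAGE_SIZE + offset;       /* physical address                     */
    }

    int main(void) {
        unsigned logical = 2 * PAGE_SIZE + 100;          /* byte 100 of page 2 */
        printf("logical %u -> physical %u\n", logical, translate(logical));
        return 0;                                        /* page 2 is in frame 7 */
    }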
Advantages of Paging
• Conserve memory by only keeping the active pages in memory. This is especially
helpful in large-scale systems where memory is scarce.
• Enables the operating system to manage more processes by allowing each
process to have its dedicated memory space. This maximizes efficiency and
performance by allowing the operating system to schedule and run each process
without conflicts.
• Allows for greater flexibility and scalability regarding the size and complexity of
the systems that can be created.
• Parts of the program are allowed to be stored at different locations in the main
memory.
• It solves the problem of external fragmentation.
• Swapping becomes very easy due to equal-sized pages and frames.
Disadvantages of Paging
• It can be very inefficient. When a process needs more memory, the operating
system must find a block of unused memory and copy it to the process.
• This process can take a long time and, in some cases, can even crash the
system.
• Paging can cause internal fragmentation in the last page of a process, which
wastes memory.
• The page table is there, which takes some memory space.
• Have to maintain a page table for each process.
• Memory access time increases as the page table needs to be accessed.
SEGMENTATION
Segmentation is a memory management technique in which a process is divided
into variable-sized parts called segments, each of which corresponds to a logical
unit of the program, such as the main program, functions, data, or stack. For every
process, the operating system maintains a segment table that records the base (the
starting physical address of the segment) and the limit (the length of the segment).
Advantages of Segmentation in OS
• No internal fragmentation is there in segmentation.
• Segment Table is used to store the records of the segments. The segment table
itself consumes less memory as compared to a page table in paging.
• Segmentation provides better CPU utilization as an entire module is loaded at
once.
• Segmentation is close to the user's view of memory. Segmentation allows
users to partition the user programs into modules. These modules are nothing
but the independent codes of the current process.
• The Segment size is specified by the user but in Paging, the hardware decides
the page size.
• Segmentation can be used to separate the security procedures and data.
Disadvantages of Segmentation in OS
• During the swapping of processes, the free memory space is broken into small
pieces, which is a major problem in the segmentation technique.
• Time is required to fetch instructions or segments.
• The swapping of segments of unequal sizes is not easy.
• There is an overhead of maintaining a segment table for each process as well.
• When a process is completed, it is removed from the main memory. After the
execution of the current process, the unevenly sized segments of the process are
removed from the main memory. Since the segments are of uneven length it
creates unevenly sized holes in the main memory. These holes in the main
memory may remain unused due to their very small size.
As shown in the image below, the base address of Segment-0 is 1400 and its
length is 1000, the base address of Segment-1 is 6300 and its length is 400, the
base address of Segment-2 is 4300 and its length is 400, and so on.
The pictorial representation of the above segmentation with its segment table
is shown below.
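A sketch of the lookup performed with such a segment table, using the base and
length values quoted above (1400/1000 for Segment-0, 6300/400 for Segment-1,
4300/400 for Segment-2):

    #include <stdio.h>

    struct segment { unsigned base; unsigned limit; };

    /* Segment table taken from the example above. */
    struct segment seg_table[] = { {1400, 1000}, {6300, 400}, {4300, 400} };

    /* Translate (segment number, offset) to a physical address; -1 on a violation. */
    long translate(unsigned s, unsigned offset) {
        if (offset >= seg_table[s].limit)
            return -1;                           /* offset beyond the segment: trap   */
        return (long)seg_table[s].base + offset; /* base + offset = physical address  */
    }

    int main(void) {
        printf("(0, 500) -> %ld\n", translate(0, 500));   /* 1400 + 500 = 1900     */
        printf("(1, 450) -> %ld\n", translate(1, 450));   /* -1: beyond limit 400  */
        return 0;
    }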
SEGMENTATION WITH PAGING
In a segmented paging scheme, each segment has its own page table. The page
number is an index into that page table, each entry of which gives a page frame.
The physical address is obtained by combining the PFN (page frame number) with
the offset within the page. As a result, addressing may be defined by the function:
va = (s, p, d)
where
va is the virtual address,
s is the segment number (an index into the segment table, ST),
p is the page number within the segment (an index into that segment's page table, PT),
d is the offset within the page.
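A sketch of this two-level translation, in which each segment-table entry points
to that segment's own page table. All table contents and the 1 KB page size below
are hypothetical:

    #include <stdio.h>

    #define PAGE_SIZE 1024u

    /* Hypothetical page tables (frame numbers) for two segments. */
    unsigned pt_seg0[] = { 3, 9, 4 };
    unsigned pt_seg1[] = { 7, 1 };

    /* Segment table: each entry points to the segment's page table. */
    unsigned *seg_table[] = { pt_seg0, pt_seg1 };

    /* va = (s, p, d): segment number, page number within the segment, offset. */
    unsigned translate(unsigned s, unsigned p, unsigned d) {
        unsigned frame = seg_table[s][p];       /* page table of segment s, entry p */
        return frame * PAGE_SIZE + d;           /* frame base plus offset           */
    }

    int main(void) {
        printf("va=(0,2,16) -> physical %u\n", translate(0, 2, 16)); /* frame 4 -> 4112 */
        return 0;
    }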
The main differences between paging and segmentation are summarized below.
• In paging, a program is divided into fixed-size pages; in segmentation, a
program is divided into variable-size segments.
• Paging could result in internal fragmentation; segmentation could result in
external fragmentation.
• In paging, the logical address is split into a page number and a page offset; in
segmentation, the logical address is split into a segment number and a segment offset.
• Paging uses a page table that holds the frame (base address) of every page;
segmentation uses a segment table that holds the base address and limit of every segment.
• In paging, the operating system must maintain a free frame list; in
segmentation, the operating system maintains a list of holes in the main memory.
• Paging is invisible to the user; segmentation is visible to the user.
• In paging, the processor uses the page number and offset to calculate the
absolute address; in segmentation, the processor uses the segment number and
offset to calculate the full address.
• In paging, it is hard to allow the sharing of procedures between processes;
segmentation facilitates the sharing of procedures between processes.
• In paging, a programmer cannot efficiently handle data structures;
segmentation can handle data structures efficiently.
• Protection is hard to apply in paging; it is easy to apply protection in segmentation.
• The size of a page must always be equal to the size of a frame; there is no
constraint on the size of segments.
• A page is referred to as a physical unit of information; a segment is referred to
as a logical unit of information.
• Paging results in a less efficient system; segmentation results in a more
efficient system.
PROTECTION
When several users share computer resources such as the CPU, memory, and
other resources, security becomes more crucial. It is the job of the operating
system to provide a mechanism that protects each process from other processes.
All assets that require protection in a multiuser environment are categorized as
objects, and those who seek to access these objects are referred to as subjects.
The operating system grants distinct access privileges to different subjects.
Protection is a method that limits the access of programs, processes, or
users to the resources defined by a computer system. Protection can be used to
allow several users to safely share a common logical namespace, such as a
directory or files, in multiprogramming operating systems. It necessitates the
safeguarding of computer resources such as software, memory, and processors. To
support a multiprogramming OS, users should apply protective steps so that
several users can safely access a common logical namespace like a directory or
shared data. Maintaining confidentiality, integrity, and availability in the OS
provides protection. The system must be protected against unauthorized access,
viruses, worms, and other malware.
Need for protection in the OS
• Isolation: Protection ensures isolation between different processes and users,
preventing unauthorized access to resources.
• Security: It protects system resources, such as memory and files, from
unauthorized access, modification, or corruption.
• Stability: Protection enhances system stability by preventing one process from
interfering with or crashing other processes.
• Fairness: It ensures fair resource allocation among competing processes,
preventing one process from monopolizing system resources.
FILE MANAGEMENT
File management in the operating system is handled by the file system, the
software that handles or manages the files (binary, text, pdf, docs, audio, video,
etc.) present in the computer system. The file system in the operating system is
capable of managing individual files as well as groups of files present in the
computer system.
The file management in the operating system manages all the files present in
the computer system with various extensions (such as .exe, .pdf, .txt, .docx, etc.)
We can also use the file system in the operating system to get details of any file(s)
present on our system. The details can be:
• location of the file (the logical location where the file is stored in the computer
system)
• the owner of the file (who can read or write on the particular file)
• when was the file created (time of file creation and modification time)
• the type of file (the format of the file, for example text, pdf, docs, etc.)
• state of completion of the file, etc.
For file management in the operating system or to make the operating
system understand a file, the file must be in a predefined structure or format.
There are three types of file structures present in the operating systems:
1. text file: A text file is a non-executable file containing a sequence of numbers,
symbols, and letters organized in the form of lines.
2. source file: A source file contains a sequence of functions and procedures. In
simple terms, we can say that a source file is a file that contains the
instructions (source code) of a program.
3. object file: An object file is a file that contains object code in the form of
machine language produced by the assembler or compiler. In simple terms, we can
say that object files contain program instructions in the form of a series of
bytes organized into blocks.
• The file management in the operating system allows users to create a new file,
and modify and delete the old files present at different locations of the computer
system.
• The operating system file management software manages the locations of the file
store so that files can be extracted easily.
• As we know, processes share files, so one of the most important features of file
management in operating systems is to make files sharable between processes. It
helps the various processes to securely access the required information from a
file.
• The operating system file management software also manages the files so that
there is very little chance of data loss or data destruction.
• The file management in the operating system provides input-output operation
support to the files so that the data can be written, read, or extracted from the
file(s).
• It also provides a standard input-output interface for the user and system
processes. The simple interface allows the easy and fast modification of data.
• The file management in the operating system also manages the various user
permissions present on a file. There are three user permissions provided by the
operating system, they are: read, write, and execute.
• The file management in the operating system supports various types of storage
devices such as flash drives, hard disk drives (HDD), magnetic tapes, optical
disks, tapes, etc., and it also allows the user(s) to store and extract them
conveniently.
• It also organizes the files in a hierarchical manner in the form of files and folders
(directories) so that the management of these files is easier from the user's
perspective as well.
Disadvantages of sequential access:
• Sequential access will become slow if the next file record to be retrieved is not
present next to the currently pointed record.
• Adding a new record may need relocating a significant number of records of the
file.
To locate a record, the system searches the indexes and then uses the
pointer-to-pointer concept to navigate to the required file.
Primary index blocks contain the links to the secondary (inner) index blocks,
which in turn contain links to the data in memory.
FILE DIRECTORIES
On a computer, a directory is used to store, arrange, and segregate files and
folders. It is similar to a telephone directory in that it contains only lists of names,
phone numbers, and addresses rather than the actual documents. It uses a hierarchical
structure to organize files and directories. On many computers, directories are
referred to as drawers or folders, much like the drawers of a standard filing
cabinet in an office. You may, for instance, create a directory for images and
another for all of your documents. By saving particular file types in a folder, you
can easily access the type of file you want to see.
There are several logical structures of a directory, these are given below.
• Single-level directory
• Two-level directory
• Tree structure or hierarchical directory
• Acyclic graph directory
• General graph directory structure
In a single-level directory, all the files are stored in the same directory, so if two
users wanted to create a file with the same name (say report.doc), it was not
allowed in a single-level directory.
In a two-level directory structure, there is a master node that has a separate
directory for each user. Each user can store the files in that directory. It can be
practically thought of as a folder that contains many folders, each for a particular
user, and now each user can store files in the allocated directory just like a single-
level directory.
The pictorial representation of a two-level directory is shown below. For
every user, there is a separate directory. At the next level, every directory stores the
files just like a single-level directory. Although not very efficient, the two-level
directory is better than a single-level directory structure.
In a general graph directory structure, a cycle may be formed (for example, within
the user 2 directory in the figure). Although it provides greater flexibility, it is
complex to implement this structure.
Advantages of General-graph directory
• Compared to the others, the General-Graph directory structure is more
flexible.
• Cycles are allowed in the directory for general graphs.
Contiguous File Allocation: In contiguous allocation, each file occupies a set of
contiguous blocks on the disk. The operating system also maintains a directory
table that includes the file name along with the starting block address and the
length of the blocks allocated. The length represents the number of disk blocks
required by the file.
In the above figure, it can be seen that the file "os.pdf" requires four disk
blocks and its starting disk block address is 1. Therefore, the blocks allocated to
the file are 1, 2, 3, and 4. Similarly, for “dbms.doc” the blocks allocated are 7, 8,
and 9 since its starting address is 7 and length is 3.
Advantages of contiguous file allocation
• Since the blocks are allocated in sequential order, therefore it can be accessed
sequentially since the starting address and the length are already available in
the directory table.
• The block allocation is similar to an array. Given the starting address, we can
"jump" to any block by simply adding the block index to the starting address, just
as we do while accessing any element of an array (see the sketch after this list).
Hence contiguous allocation also allows random access to the blocks.
• The seek time is less because of the contiguous allocation of blocks. This makes
it very fast.
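A sketch of that address arithmetic, using the directory-table entries quoted
earlier (os.pdf starting at block 1 with length 4, and dbms.doc starting at block 7
with length 3):

    #include <stdio.h>

    /* Directory-table entry for a contiguously allocated file. */
    struct dir_entry { const char *name; unsigned start; unsigned length; };

    /* Return the disk block holding logical block i of the file, or -1 if out of range. */
    long block_of(struct dir_entry *f, unsigned i) {
        if (i >= f->length)
            return -1;                 /* beyond the end of the file   */
        return f->start + i;           /* random access: start + index */
    }

    int main(void) {
        struct dir_entry os   = { "os.pdf",   1, 4 };   /* blocks 1, 2, 3, 4 */
        struct dir_entry dbms = { "dbms.doc", 7, 3 };   /* blocks 7, 8, 9    */
        printf("os.pdf block 2   -> disk block %ld\n", block_of(&os, 2));    /* 3 */
        printf("dbms.doc block 1 -> disk block %ld\n", block_of(&dbms, 1));  /* 8 */
        return 0;
    }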
Disadvantages of contiguous file allocation
• It suffers from internal fragmentation. Suppose the size of a block is 2KB, but
the file that has to be stored is just 1 KB. In that case, an extra 1KB remains
unutilized and the memory is wasted.
• It suffers from external fragmentation. If there are sufficient blocks available to
store a file, but if they are not contiguous, the file cannot be stored.
• The size of the file can be increased only if free contiguous disk blocks are
available.
• Although it allows fast access, memory management is poor.
Linked File Allocation: The linked allocation works just like the linked list.
The problem with contiguous allocation was that memory remained unutilized due
to external fragmentation. The solution to the problem was to allocate the disk
block in the form of a linked list where every block was a node.
The blocks are allocated in such a way that every block contains a pointer to
the next block that is allocated to the file.
The above image shows how the linked allocation works. The file "os.pdf" has
to be allocated some blocks. The first block allocated is 4. Block 4 will have a
pointer to block 8, block 8 will have a pointer to block 10, block 10 will have a
pointer to block 11, block 11 will have a pointer to block 2, and finally, block 2 will
point to 3. In this manner, a total of six blocks are allocated to the file non-
contiguously. The ending block (block 3) will not point to any other block.
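A sketch of walking such a chain of blocks. The next_block array below simply
encodes the example chain 4 -> 8 -> 10 -> 11 -> 2 -> 3, with -1 marking the last
block; in a real file system the pointer is stored inside each data block:

    #include <stdio.h>

    #define END -1

    /* next_block[b] is the block that follows disk block b in the file (END if last).
       The values encode the example chain 4 -> 8 -> 10 -> 11 -> 2 -> 3. */
    int next_block[12] = {
        [4] = 8, [8] = 10, [10] = 11, [11] = 2, [2] = 3, [3] = END
    };

    int main(void) {
        int b = 4;                       /* starting block from the directory entry */
        printf("os.pdf occupies blocks:");
        while (b != END) {
            printf(" %d", b);
            b = next_block[b];           /* follow the pointer stored in the block  */
        }
        printf("\n");
        return 0;
    }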
Advantages of linked file allocation
• There is no external fragmentation because blocks can be allocated in random
order with the help of pointers. Contiguous allocation is not required.
• File size can be increased, even if the contiguous blocks are not available,
provided there are enough blocks to store the file.
• Judicious use of memory.
Indexed File Allocation: In indexed allocation, all the pointers to a file's blocks
are brought together in one special disk block called the index block. From the
image, we can see that block number 8 does not store the file itself but contains
the pointers to various other blocks, which store the file. The directory table
contains only the file name and the index block for the respective files.
Below is the pictorial representation of index block 8, which contains the
pointers that determine the address of the blocks that store the "os.pdf" file.
Since the size of every block is limited, there will be problems if the numbers
of pointers to other blocks are very large in number such that a block is not
sufficient to store it.
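A sketch of an index-block lookup based on the example above, where index block
8 lists the data blocks of os.pdf; the block numbers in the array are illustrative:

    #include <stdio.h>

    /* Index block 8 for "os.pdf": one pointer per data block of the file.
       The block numbers are illustrative. */
    int index_block_8[] = { 9, 16, 1, 10, 25 };
    int n_blocks = 5;

    /* Direct access: logical block i of the file is simply entry i of the index block. */
    int block_of(unsigned i) {
        if (i >= (unsigned)n_blocks)
            return -1;
        return index_block_8[i];
    }

    int main(void) {
        printf("os.pdf logical block 3 -> disk block %d\n", block_of(3));  /* 10 */
        return 0;
    }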
Advantages of Indexed allocation
• No external fragmentation.
• Allows random access to disk blocks.
• Allows direct access, reducing complexity.
Disadvantages of Indexed allocation
• It is very complex.
• Extra memory for index blocks.
• Large pointer overhead.
• For very large files, a single index block may not be able to hold all the
pointers.
File Allocation Table (FAT): The File Allocation Table (FAT) is a file system
format commonly used in older computer systems and removable storage devices.
It organizes data storage by maintaining a table that tracks the allocation status of
individual file clusters on a storage medium. While less common today, FAT was
instrumental in early computing, providing a straightforward way to manage files
and directories.
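A sketch of how a FAT chain is followed. Unlike the linked scheme above, the
pointers live in one table (the FAT) rather than inside the data blocks. The cluster
numbers and the simplified end-of-chain marker below are invented for illustration:

    #include <stdio.h>

    #define EOC 0xFFFF   /* simplified end-of-chain marker */
    #define FAT_SIZE 16

    /* fat[c] is the next cluster after cluster c, or EOC for the last one.
       The chain 3 -> 5 -> 6 -> 9 is purely illustrative. */
    unsigned fat[FAT_SIZE] = { [3] = 5, [5] = 6, [6] = 9, [9] = EOC };

    int main(void) {
        unsigned c = 3;                  /* first cluster, taken from the directory entry */
        printf("file clusters:");
        while (c != EOC) {
            printf(" %u", c);
            c = fat[c];                  /* look up the next cluster in the FAT */
        }
        printf("\n");
        return 0;
    }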
Advantages of File Allocation Table (FAT)
• FAT is widely supported across different operating systems and devices, making
it easy to share data between various platforms without compatibility issues.
• FAT's straightforward structure makes it relatively easy to implement and
understand, which was particularly advantageous in the early days of
computing.
• The minimalistic design of FAT requires less storage space and processing power
compared to more modern file systems, making it suitable for systems with
limited resources.
• Due to its simplicity, FAT file systems are often recoverable using basic data
recovery tools, allowing for potential retrieval of lost or deleted data.
Inode: An inode, short for "index node," is a data structure in Unix-like file
systems that stores metadata about a file, such as its permissions, ownership, size,
and location of data blocks on the storage device. Inodes play a crucial role in
efficient file management and data retrieval, as they enable the operating system to
quickly locate and access files. Each file or directory on the file system corresponds
to an inode, allowing for organized and optimized storage allocation.
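A simplified sketch of the metadata an inode holds. The field layout below is only
illustrative and is not the exact on-disk format of any particular Unix file system:

    #include <stdio.h>
    #include <sys/types.h>
    #include <time.h>

    #define N_DIRECT 12   /* number of direct block pointers (illustrative) */

    /* Simplified inode: metadata only; the file name lives in the directory entry. */
    struct inode {
        mode_t   mode;                 /* file type and permission bits          */
        uid_t    uid;                  /* owner                                  */
        gid_t    gid;                  /* group                                  */
        off_t    size;                 /* file size in bytes                     */
        nlink_t  link_count;           /* number of hard links to this inode     */
        time_t   atime, mtime, ctime;  /* access / modification / change times   */
        unsigned direct[N_DIRECT];     /* direct data-block pointers             */
        unsigned single_indirect;      /* block holding further block pointers   */
        unsigned double_indirect;      /* block of blocks of pointers            */
    };

    int main(void) {
        printf("in-memory inode size: %zu bytes\n", sizeof(struct inode));
        return 0;
    }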
Advantages of Inode
• Inodes enable rapid access to file metadata and data blocks, making file
operations like opening, reading, and writing faster and more efficient.
• Inodes allow sparse files—files with unallocated gaps—to be managed effectively,
as they only allocate space for actual data blocks, optimizing storage usage.
• Inodes facilitate the creation of hard links, which are multiple directory entries
pointing to the same inode.
• Inodes enhance file system stability by maintaining data consistency and
reducing the risk of data corruption.
Disadvantages of Inode
• Inodes consume a fixed amount of storage space regardless of file size.
• File systems have a finite number of inodes available.
• As directories grow in size, the number of inodes used to represent directory
entries can increase.
• Inode allocation and management can contribute to storage fragmentation.
REVIEW QUESTIONS
1. Write short notes on (1) Swapping and (2) Memory Allocation.
2. What is dynamic loading and dynamic linking? Explain.
3. Describe the following memory allocation methods: (1) Single partition
allocation and (2) Multiple partition allocation.
4. What is paging? Explain.
5. Explain segmentation with paging.
6. Write short notes on (i) Paging and (ii) Compaction.
7. What is swapping? Explain the swap-in and swap-out process with a well-
labelled diagram.
8. Explain memory management requirements.
9. Explain: (i) Logical vs. physical address space and (ii) Internal vs. external
fragmentation.
10. Explain the single partition allocation mechanism with an example.
11. Explain the concept of paging.
12. What is Paging? Explain. Write the advantages and disadvantages of paging.
13. What is Segmentation? Explain. Write advantages of segmentation.
14. What is Long Term Scheduling?
15. Explain Relocation.
16. Explain the method of multiple partition memory management.
17. Write a short note on 'relocation and protection'.
18. What is disk space management? Explain record blocking.
19. Differentiate between logical and physical address space.
20. Differentiate between paging and segmentation.