practice problem set OS
1. Define memory management in the context of a basic bare machine and resident monitor.
Memory Management: In the context of a basic bare machine, memory management refers to the direct
execution of programs on the hardware without an operating system. The bare machine accepts
programs and instructions in machine language, which makes it inefficient and cumbersome compared
to systems with an operating system.
In the context of a bare machine, memory management is quite straightforward because there is no
operating system to manage the memory. The entire memory is available to the user program except for
a small portion reserved for the system. The user has direct control over where and how memory is
allocated and deallocated. This provides maximum flexibility but also places a lot of responsibility on
the programmer to manage memory efficiently and avoid errors such as memory leaks or buffer
overflows.
Advantages of this approach include:
Simplicity
Low cost as no additional hardware/software is required
Maximum flexibility to the user
Disadvantages include:
No services provided
Only highly technical people could use the machine
The approach could only be used on dedicated systems
Resident Monitor: A resident monitor runs on a bare machine and acts like a primitive operating system,
controlling job sequencing and program execution. It sequences jobs and loads
programs into the main memory for execution, offering the advantage of little or no lag between program
executions.
A resident monitor is a type of operating system where the monitor (a program) resides in the memory at
all times. The monitor is responsible for managing the memory and ensuring that each program gets its
fair share of memory resources. The memory is divided into two parts: one for the monitor and one for
the user program. When a user program is loaded into memory, the monitor checks to ensure that it does
not overlap with the monitor’s memory space. The monitor also handles interrupts and transfers control
between user programs. This type of memory management provides more protection and efficiency
compared to a bare machine, but it also requires more system resources to manage the memory.
Advantages of the approach:
An improvement on bare machine strategy
The space for the resident monitor could grow or shrink over time, with the fence address changing
accordingly
The OS could run in monitor mode
Disadvantages of the approach:
The objective of multiprogramming was still a problem
Limited services provided by the OS
Fence address must be static during execution of a program
Fixed Partitioning: In this scheme, main memory is divided into a number of fixed-size partitions. When a
process is ready to run, it is selected from the input queue and loaded into an available partition. Once the
process completes its execution, the partition is freed and becomes available for another process. Importantly,
each process is allocated a specific partition and is only permitted to use the memory within that partition.
However, this method can lead to certain inefficiencies. For instance, internal fragmentation can occur when a
process’s memory requirements are smaller than the size of its allocated partition, resulting in unused memory
within the partition. Additionally, external fragmentation can arise when the total unused space across various
partitions is sufficient to load a process, but this space is not contiguous.
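The allocation and fragmentation behavior described above can be sketched in a few lines. This is an illustrative simulation, not part of the problem set; the partition and process sizes are made up for the example.

```python
# Sketch of fixed-partition allocation. Each process gets exactly one
# partition; any unused space inside the partition is internal fragmentation.

def fixed_partition_allocate(partitions, processes):
    """First-fit over fixed partitions; returns (assignments, internal_frag)."""
    free = list(partitions)            # None marks an occupied partition
    assignments = {}
    internal_frag = 0
    for name, size in processes:
        for i, part in enumerate(free):
            if part is not None and size <= part:
                assignments[name] = i
                internal_frag += part - size   # wasted space inside the partition
                free[i] = None                 # whole partition is now occupied
                break
        else:
            assignments[name] = None           # no free partition is large enough
    return assignments, internal_frag

parts = [100, 500, 200, 300]                   # fixed at boot time (hypothetical)
procs = [("P1", 212), ("P2", 417), ("P3", 112), ("P4", 426)]
assign, frag = fixed_partition_allocate(parts, procs)
print(assign, "internal fragmentation =", frag)
```

Here P2 and P4 cannot be placed even though the total free space (100 + 300 = 400 KB) would fit either of them, which is exactly the external fragmentation problem described above.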
2. Discuss the advantages and limitations of this memory management technique.
• Advantages:
• Implementation is simple.
• Processing overhead is low.
• Disadvantages:
• Process size is limited by the partition size.
• The degree of multiprogramming is limited by the number of partitions.
• Causes external fragmentation because of contiguous memory allocation.
• Causes internal fragmentation due to the fixed partitioning of memory.
4. Explain the challenges and benefits of managing memory with variable partitions.
Variable Partition Challenges:
o External Fragmentation: Memory is allocated as processes enter the system,
leading to small unusable holes.
o Memory Allocation Complexity: The lack of fixed partition sizes increases
complexity in memory allocation.
Variable Partition Benefits:
o No Internal Fragmentation: Unlike fixed partitions, variable partitions do not
waste space within allocated memory.
o Flexible Process Size: There is no limitation on the number of processes or their
sizes, allowing for more flexibility.
11. Define demand paging and explain its role in virtual memory.
Demand paging is a memory management scheme that loads pages into memory only when
they are needed for execution, rather than loading the entire process at once. Here’s how it
plays a role in virtual memory:
Efficient Memory Usage: It allows for more efficient use of physical memory, as only the
necessary pages are loaded, supporting larger programs and faster program start-up.
Page Fault Handling: When a process tries to access a page not in memory, a page fault occurs,
and the system loads the required page from disk.
Virtual Address Space: It enables a process to have a larger virtual address space than the
physical memory, allowing for the execution of larger programs.
System Performance: By reducing the number of I/O operations needed to load or swap
programs into memory, it improves system performance and CPU utilization.
15. Discuss the effects of thrashing on system performance. Explain strategies to mitigate thrashing.
Effects of Thrashing:
o Thrashing occurs when a system spends excessive time swapping pages between RAM
and the backing store (disk) due to high memory demand and low available resources.
o It leads to a significant decrease in system performance, as the CPU is more occupied
with swapping pages than with executing code.
Mitigation Strategies:
o Increase Main Memory: Adding more physical memory can reduce the need for
swapping, thus mitigating thrashing.
o Efficient Scheduling: Using the long-term scheduler to control the degree of
multiprogramming can help prevent system overload and reduce thrashing.
o Resource Allocation: Proper allocation of resources and limiting the degree of
multiprogramming can prevent excessive swapping and thrashing.
20. Discuss how memory management techniques leverage locality of reference to improve
performance.
Memory management techniques exploit the concept of Locality of Reference to enhance
performance:
Temporal Locality: This refers to the tendency of a program to access the same set of
memory locations repeatedly over a short period of time. Memory management
techniques leverage this by keeping recently accessed data in cache memory, which is
faster to access than main memory.
Spatial Locality: This concept is based on the likelihood of programs accessing
memory locations that are close to those recently accessed. Memory management
techniques take advantage of spatial locality by loading neighboring memory locations
into cache memory in anticipation of future access.
Cache Memory: The use of cache memory is a direct application of locality of
reference. Cache memory stores frequently accessed data and instructions, allowing for
quicker access by the CPU and reducing the average time to access data from the main
memory.
Cache Operation: When a CPU accesses data, it first checks the cache. If the data is
present (cache hit), it is quickly retrieved. If not (cache miss), the data is fetched from
main memory and stored in the cache for future access, capitalizing on temporal
locality. Memory management systems are designed to optimize this process to
minimize cache misses and improve overall system performance.
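The hit/miss behavior described above can be demonstrated with a small least-recently-used (LRU) cache. This is a minimal sketch, not a model of real CPU cache hardware; the capacity and address trace are invented for the example.

```python
from collections import OrderedDict

# Minimal LRU cache: repeated accesses to the same address (temporal
# locality) hit the cache; cold addresses miss and evict the
# least-recently-used entry.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()     # insertion order tracks recency
        self.hits = self.misses = 0

    def access(self, address):
        if address in self.store:
            self.hits += 1
            self.store.move_to_end(address)     # mark as most recently used
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict the LRU entry
            self.store[address] = True

cache = LRUCache(capacity=3)
for addr in [1, 2, 1, 3, 1, 4, 1, 2]:           # address 1 is "hot"
    cache.access(addr)
print("hits:", cache.hits, "misses:", cache.misses)
```

Because address 1 is reused frequently, it stays resident and every repeat access after the first is a hit, which is the temporal-locality payoff the text describes.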
22. Explain how protection schemes are implemented to ensure the security and integrity of memory.
Protection schemes in memory management are designed to control the access of
programs, processes, or users to the resources of a computer system. They are implemented
as follows:
Access Control: Protection schemes ensure that only authorized processes can operate on files,
memory segments, CPU, and other resources.
Protection Domains: Each process operates within a domain that specifies the resources it can
access and the operations it can perform.
Security Measures: The operating system employs various security measures like passwords,
encryption, and authentication techniques to prevent unauthorized access.
Protection Goals: The main goal is to enforce policies that define how resources are used by
processes, ensuring data and process security.
These mechanisms work together to maintain the security and integrity of the system by
preventing unauthorized access and misuse of resources. Protection in operating systems is a
critical aspect that supports safe sharing of resources and compliance with security standards.
1. Consider a computer system with a 32-bit virtual address space and a page size of 4KB. If the system
uses a two-level page table with each table fitting in a single page, what is the size of the outer page
table?
2. Consider a system with a 32-bit virtual address space and a page size of 4KB. If the page table entry
size is 8 bytes, what is the size of the page table in bytes for a process that has 64 entries in its page
table?
3. Given a system with a 128-bit virtual address space and a page size of 32KB, if the page table entry
size is 8 bytes, what is the maximum number of page table entries that can fit in a single page?
4. Consider a machine with 64 MB physical memory and a 32bit virtual address space. If the page size is
4 KB, what is the approximate size of the page table?
5. Consider a computer system with a 36-bit virtual address space and a page size of 2KB. If the page
table entry size is 6 bytes, what is the size of the page table in bytes for a process that has 128 entries in
its page table?
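The arithmetic behind problems 1, 2, 3 and 5 can be checked with a few lines. Problem 1 does not state the page-table entry size, so 4-byte entries are assumed here; that assumption is flagged in the code.

```python
# Worked arithmetic for the page-table sizing problems above.

KB = 1024

# Problem 2: 64 entries x 8 bytes each
pt2 = 64 * 8                        # page-table size in bytes

# Problem 3: how many 8-byte entries fit in one 32 KB page
pt3 = (32 * KB) // 8                # entries per page

# Problem 5: 128 entries x 6 bytes each
pt5 = 128 * 6                       # page-table size in bytes

# Problem 1 (ASSUMED 4-byte entries, since the problem omits the entry size):
# a 4 KB page holds 4096 / 4 = 1024 entries, i.e. 10 bits per level of the
# two-level table, so the outer table also has 1024 entries of 4 bytes.
entries_per_page = (4 * KB) // 4
outer_table = entries_per_page * 4  # outer page-table size in bytes

print(pt2, pt3, pt5, outer_table)
```

With those assumptions the answers come out to 512 bytes, 4096 entries, 768 bytes, and a 4 KB outer table respectively; a different assumed entry size in problem 1 changes the last result.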
6. Assume an average page-fault service time of 25 milliseconds and a memory access time of 100
nanoseconds. Find the Effective Access Time, where p is the page-fault rate:
Effective Access Time (EAT) = (1 − p) × ma + p × page-fault time
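Problem 6 gives the memory access time and the fault service time but no page-fault rate p, so the sketch below leaves p as a parameter rather than guessing a value.

```python
# EAT = (1 - p) * ma + p * page_fault_time, with all times in nanoseconds.
# ma = 100 ns and fault service time = 25 ms = 25_000_000 ns, per problem 6.

def effective_access_time(p, ma_ns=100, fault_ns=25_000_000):
    """Expected memory access cost given page-fault probability p."""
    return (1 - p) * ma_ns + p * fault_ns

# Example: one fault per million accesses still adds 25 ns on average,
# a 25% slowdown over the raw 100 ns access time.
print(effective_access_time(1e-6))
```

The example illustrates why even tiny fault rates dominate the EAT: the fault penalty is five orders of magnitude larger than a memory access.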
Q1 Consider a system with three resource types (A, B, C) and three processes (P1, P2, P3). The maximum
resource requirement, allocation, and current available resources for each process and resource type are given
below:
Using the Banker's algorithm, determine if the system is in a safe state or deadlock. Show your work
step by step, including the calculation of the need matrix, the work and finish arrays, and the sequence
of processes. If the system is in deadlock, explain which processes are deadlocked and why. Propose a
resource allocation strategy that prevents deadlock in the given system. Justify your choice of strategy
and explain how it avoids deadlock.
Q2. Consider a system with four resource types A, B, C, and D, and four processes P1, P2, P3, and P4.
The maximum resource requirement and allocation for each process are as follows:
Process P1: Max (2, 1, 1, 2), Allocation (1, 0, 0, 1)
Process P2: Max (2, 2, 1, 1), Allocation (1, 1, 0, 0)
Process P3: Max (1, 2, 2, 1), Allocation (0, 1, 1, 0)
Process P4: Max (1, 1, 1, 1), Allocation (1, 1, 1, 1)
Initially, the available resources are (2, 1, 1, 2). Determine whether the system is in a deadlock state or
not. If so, identify the deadlock and the processes involved. If not, explain why the system is deadlock-
free.
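The safety check for Q2 can be sketched directly as the Banker's algorithm over the given matrices. Need is Max − Allocation; the loop repeatedly finds a process whose Need fits in Work and releases its allocation.

```python
# Banker's safety check, applied to the Q2 data above.

def is_safe(available, max_need, allocation):
    """Return (safe?, completion order as process indices)."""
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finish = [False] * n
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]  # i finishes, releases resources
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order

available = [2, 1, 1, 2]
max_need  = [[2, 1, 1, 2], [2, 2, 1, 1], [1, 2, 2, 1], [1, 1, 1, 1]]
alloc     = [[1, 0, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0], [1, 1, 1, 1]]
safe, order = is_safe(available, max_need, alloc)
print("safe" if safe else "unsafe", [f"P{i + 1}" for i in order])
```

For this data every Need row is at most (1, 1, 1, 1), which already fits in the initial Available of (2, 1, 1, 2), so a safe sequence exists and the system is deadlock-free.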
1. The sequence of requests for blocks of size 300, 25, 125, 50 can be satisfied if we use Either first fit or
best fit policy (any one).
In memory management, both first fit and best fit memory allocation strategies could
potentially satisfy a sequence of block size requests of 300, 25, 125, 50, depending on the initial
state of the memory. Here’s a brief explanation of how each strategy would work:
First Fit: This strategy allocates the first available block of memory that is large
enough to accommodate the requested size. It scans memory from the beginning and
stops at the first block that is big enough.
Best Fit: This strategy searches for the smallest block of memory that is large enough
to accommodate the requested size. It aims to find the block that will leave the least
amount of leftover memory after the allocation.
For the given sequence:
1. A request for a block of size 300 would be allocated to the first (in First Fit) or the
smallest (in Best Fit) available block of memory that is at least 300 units.
2. A request for a block of size 25 would follow, looking for the next block in the case of
First Fit or the smallest adequate block for Best Fit.
3. The process would repeat for block sizes 125 and 50.
The success of these requests largely depends on the initial state of the memory. If there are
enough blocks of appropriate sizes available, then both policies can satisfy the requests without
issue. However, if the memory is heavily fragmented or there are not enough suitable blocks,
then one or both strategies might fail to allocate memory for all requests.
2. Consider six memory partitions of size 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB. These
partitions need to be allocated to four processes of sizes 357 KB, 210 KB, 468 KB and 491 KB in that
order. Perform the allocation of processes using-First Fit Algorithm Best Fit Algorithm Worst Fit
Algorithm.
3. Consider the requests from processes in given order 300K, 25K, 125K, and 50K. Let there be two
blocks of memory available of size 150K followed by a block size 350K.
Which of the following partition allocation schemes can satisfy the above requests?
A) Best fit but not first fit.
B) First fit but not best fit.
C) Both First fit & Best fit.
D) neither first fit nor best fit.
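Problem 3 can be settled by simulating both policies on the given free list (150K then 350K). This sketch treats the blocks as variable partitions that shrink in place as requests are carved out of them.

```python
# First-fit vs best-fit on problem 3's data: free blocks of 150K then 350K,
# requests 300K, 25K, 125K, 50K, in that order.

def allocate(blocks, requests, policy):
    """Return True if every request can be satisfied under the given policy."""
    blocks = list(blocks)                      # don't mutate the caller's list
    for req in requests:
        candidates = [i for i, b in enumerate(blocks) if b >= req]
        if not candidates:
            return False                       # this request cannot be placed
        if policy == "first":
            i = candidates[0]                  # first adequate block
        else:                                  # "best": smallest adequate block
            i = min(candidates, key=lambda j: blocks[j])
        blocks[i] -= req                       # carve the request out of it
    return True

blocks, reqs = [150, 350], [300, 25, 125, 50]
print("first fit:", allocate(blocks, reqs, "first"))
print("best fit:",  allocate(blocks, reqs, "best"))
```

First fit places 300 in the 350K block, then 25 and 125 in the 150K block, then 50 in the 350K remainder. Best fit puts the 25K request into the 350K block's 50K remainder, leaving 25K there, so the final 50K request fails. The answer is therefore (B), first fit but not best fit.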
4. Consider a system with 32 KB of physical memory and the following memory allocation requests
from processes: Process P1 requests 10 KB of memory. Process P2 requests 6 KB of memory
Process P3 requests 12 KB of memory. Process P4 requests 4 KB of memory. Process P5 requests 8
KB of memory. The system uses the following memory allocation techniques:1. First Fit 2.
Best Fit 3. Worst Fit
6. Discuss various disk scheduling algorithms such as FCFS, SSTF, SCAN, C-SCAN. Analyze the
advantages and disadvantages of each algorithm.
A detailed analysis of the main disk scheduling algorithms:
FCFS (First-Come, First-Served):
o Advantages:
Simple to implement.
No starvation; every request is serviced.
o Disadvantages:
Does not optimize seek time, so average seek time is high.
Not efficient for heavy workloads.
SSTF (Shortest Seek Time First):
o Advantages:
Reduces total seek time compared to FCFS.
Disk response time is lower.
More efficient than FCFS.
o Disadvantages:
Can cause starvation for requests far from the head.
Frequent direction switching slows down the algorithm.
Higher per-request overhead, since the closest request must be found each time.
SCAN (Elevator Algorithm):
o Advantages:
Easy to implement.
No starvation; every request is eventually serviced on a sweep.
o Disadvantages:
The head continues to the end of the disk even if there are no requests there, wasting time.
C-SCAN (Circular SCAN):
o Advantages:
Uniformly distributes waiting time among requests.
Good response time.
o Disadvantages:
The return sweep increases the time for the disk arm to reach a given spot.
The head continues to the end of the disk even when unnecessary.
These algorithms are designed to manage how disk I/O (input/output) requests are serviced by
an operating system, optimizing for factors like efficiency, speed, and fairness. Each algorithm
has its own set of trade-offs between these factors.
7. Explain the structure and attributes of a file. Discuss different file types and their uses.
The structure and attributes of a file, as well as different file types and their uses:
File Structure: A file is a data structure that stores a sequence of records. It can be
simple, like plain text, or complex, with special formatting. The file system, which may
exist on a disk or in main memory, is responsible for file management and provides
mechanisms to store data and access file contents, including data and programs.
Attributes of Files: Files have several attributes:
o Name: Identifies the file to users of the system.
o Identifier: A unique tag, usually a number, that identifies the file within the file system.
o Type: Classification of files (e.g., video, audio, text, executable), often indicated by
the file extension (e.g., .txt for text files).
o Location: Where the file is stored on the device.
o Size: The amount of storage space the file occupies.
o Protection: Access permissions for different user groups.
o Time and Date: The last modification timestamp.
File Types: Common file types include video files, audio files,
text files, and executable files. Each type serves a specific purpose:
o Video Files: For storing and playing video content.
o Audio Files: For storing and playing audio content.
o Text Files: For storing textual data, often editable with text editors.
o Executable Files: Contain code that can be executed by the computer to perform tasks
or run programs.
Uses: Files are used to organize and store data in a way that makes it accessible and
manageable. They allow users to save, retrieve, and manipulate information as needed,
supporting a wide range of applications from document editing to media playback and
software execution. The file system ensures that these files are stored efficiently and can
be accessed by the operating system and applications.
10. Discuss the importance of file system security and protection mechanisms in modern computing
environments.
The importance of file system security and protection mechanisms in modern computing
environments is highlighted by the following key points:
Data Integrity and Confidentiality: The file system ensures the integrity and
confidentiality of data by preventing unauthorized access and modifications. It uses
protection mechanisms like permissions and encryption to safeguard data.
System Stability: Protection mechanisms help maintain system stability by controlling
access to resources, ensuring that processes and programs do not interfere with each
other.
User Access Control: File systems provide mechanisms to define and enforce who can
access or modify files, allowing for safe sharing of resources among multiple users.
Threat Prevention: Security measures within the file system protect against external
threats such as viruses, worms, and unauthorized intrusions, ensuring the overall safety
of the computing environment.
These aspects are crucial for maintaining the reliability, efficiency, and security of computer
systems, especially in multi-user and networked environments where data is shared and
accessed by various entities.
11. Describe common file system security threats such as malware, data breaches, and insider attacks.
Discuss strategies for mitigating these threats and enhancing file system security.
Key points related to file system security threats and strategies for their mitigation:
Malware Protection: Anti-virus and malware protection tools prevent malicious
software from compromising file systems.
Access Control: Secure authentication and authorization control access to files and
resources, ensuring that only authorized users can access sensitive data.
Data Backup: Keeping regular data backups is a crucial step in recovering data in case
of corruption or loss due to security incidents.
Network Security: Firewalls and secure Wi-Fi monitor and filter network traffic,
protecting the file system from unauthorized access.
These strategies collectively contribute to enhancing the security of file systems against
common threats like malware, data breaches, and insider attacks. It’s important to implement a
combination of these measures to ensure comprehensive protection.
12. Define RAID (Redundant Array of Independent Disks) and explain its purpose in data storage systems.
Discuss the different RAID levels (e.g., RAID 0, RAID 1, RAID 2, RAID 3, RAID 4, RAID 5 and 6).
RAID stands for Redundant Array of Independent Disks. It is a data storage virtualization
technology that combines multiple physical disk drive components into one or more logical
units for the purposes of data redundancy, performance improvement, or both. Here’s an
overview of the different RAID levels and their purposes:
RAID 0 (Striping): This level splits data evenly across two or more disks with no
redundancy. It improves performance but does not provide fault tolerance.
RAID 1 (Mirroring): Data is copied identically to two or more disks. It provides
redundancy in case one disk fails and can improve read performance, since reads can be
served by any mirror, but it does not improve write performance and halves the usable capacity.
RAID 2: Uses error correction codes and is not commonly used today.
RAID 3 (Byte-level Striping with Dedicated Parity): Rarely used, it stripes data at the
byte level and uses a dedicated disk for parity.
RAID 4 (Block-level Striping with Dedicated Parity): Similar to RAID 3 but stripes
data at the block level.
RAID 5 (Block-level Striping with Distributed Parity): Data and parity are striped
across three or more disks. It provides good performance and fault tolerance.
RAID 6 (Block-level Striping with Double Distributed Parity): Similar to RAID 5
but with an additional parity block, allowing it to survive the failure of two disks.
Each RAID level offers a different balance of performance, storage capacity, and fault tolerance
to meet various storage needs. RAID can be implemented through either hardware or software
solutions, depending on the requirements and budget of the data storage system.
13. Consider one disk with 200 cylinders, numbered 0 to 199. Assume the current position of head is at
cylinder 66. The request queue is given as follows: 55, 32, 6, 99,58, 71, 86, 153, 11, 179, 42. Answer
for each of the following disk-scheduling algorithms: (i) First Come First Served (FCFS)
(ii) Shortest Seek Time First (SSTF) (iii) SCAN (iv) C-SCAN (v) LOOK (vi) C-LOOK.
Count the total distance (in cylinders) of the disk arm movement to satisfy the requests.
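The FCFS and SSTF totals for problem 13 reduce to straightforward arithmetic that can be sketched as below. SCAN, C-SCAN, LOOK and C-LOOK are left to work by hand, since their totals depend on the sweep-direction convention the course uses.

```python
# Seek-distance arithmetic for problem 13: head at cylinder 66,
# queue 55, 32, 6, 99, 58, 71, 86, 153, 11, 179, 42.

def fcfs_distance(head, queue):
    """Total head movement servicing requests in arrival order."""
    total = 0
    for cyl in queue:
        total += abs(head - cyl)
        head = cyl
    return total

def sstf_distance(head, queue):
    """Total head movement always servicing the closest pending request."""
    pending, total = list(queue), 0
    while pending:
        nxt = min(pending, key=lambda c: abs(head - c))  # closest request
        total += abs(head - nxt)
        head = nxt
        pending.remove(nxt)
    return total

queue = [55, 32, 6, 99, 58, 71, 86, 153, 11, 179, 42]
print("FCFS:", fcfs_distance(66, queue))   # 736 cylinders
print("SSTF:", sstf_distance(66, queue))   # 243 cylinders
```

The same two functions can be reused for problems 14 and 15 by changing the head position and request queue.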
14. Consider an imaginary disk with 51 cylinders. A request comes in to read a block on cylinder 11.
While the seek to cylinder 11 is in progress, new requests come in for cylinders= 1, 36, 16, 34, 9, and
12, in that order. Starting from the current head position, what is the total distance (in cylinders) that the
disk arm moves to satisfy all the pending requests, for each of the following disk scheduling
Algorithms? 1. FCFS (First come first serve) 2. SSTF (Shortest seek time first) 3. SCAN
4. C-SCAN 5. LOOK (Elevator) 6. C-LOOK.
15. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the Read-Write
head is 60. Answer for each of the following disk-scheduling algorithms: (i) First Come First Served
(FCFS) (ii) Shortest Seek Time First (SSTF) (iii) SCAN (iv) C-SCAN (v) LOOK (vi) C-LOOK.
Count the total distance (in cylinders) of the disk arm movement to satisfy the requests.