Operating System Assignment

The document discusses various concepts in computer systems, including page-replacement strategies, security levels, memory allocation strategies, and disk scheduling algorithms. It also covers process management, inter-process communication, and the objectives of operating systems. Additionally, it addresses issues like deadlock, thrashing, and the advantages of using semaphores for mutual exclusion.

Question 1

a. Discuss any four page-replacement strategies in computer memory. [8]

Least Recently Used (LRU): This strategy replaces the page that has not
been used for the longest period of time. It uses a timestamp or counters
to track page usage history.

First-In-First-Out (FIFO): The oldest page in memory is replaced first. Pages are added to a queue, and when a replacement is needed, the page at the front of the queue is evicted.

Optimal Page Replacement: This strategy replaces the page that will not
be used for the longest period in the future. Although it provides the best
performance, it's impractical as it requires future knowledge.

Clock Algorithm: This is a modified version of FIFO in which the pages are kept in a circular list. When a page needs to be replaced, the algorithm checks the reference bit of the page under the clock hand; if the bit is set, it clears the bit and moves on to the next page.
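As a rough sketch (in Python, with illustrative names), the clock algorithm's sweep can be simulated as follows:

```python
def clock_faults(frames, refs):
    """Count page faults under the clock (second-chance) algorithm.
    `ring` stands in for the circular frame list; entries are [page, ref_bit]."""
    ring, hand, faults = [], 0, 0
    for page in refs:
        for entry in ring:
            if entry[0] == page:
                entry[1] = 1                  # hit: set the reference bit
                break
        else:
            faults += 1
            if len(ring) < frames:
                ring.append([page, 1])        # a free frame is still available
            else:
                # sweep: clear set bits, skipping recently used pages
                while ring[hand][1] == 1:
                    ring[hand][1] = 0
                    hand = (hand + 1) % frames
                ring[hand] = [page, 1]        # evict the page under the hand
                hand = (hand + 1) % frames
    return faults
```

For example, `clock_faults(4, [1, 2, 3, 4, 1, 5, 2, 3, 1, 2])` returns 6.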

b. Identify the term used to describe the erroneous situation that occurs if
a program in execution references a page that does not exist. [1]

This situation is called a page fault.

c. Describe any four levels of security in a computer system. [4]

Physical Security: Protecting hardware and facilities against physical threats such as theft or natural disasters.

Network Security: Protecting the network infrastructure from unauthorized access and attacks, using tools such as firewalls and intrusion detection systems.

System Security: Measures to protect the operating system and applications from malware and vulnerabilities, using antivirus software and system updates.

Data Security: Ensuring the integrity and confidentiality of data through
encryption, access controls, and secure data storage methods.

d. Distinguish between a page and a frame. [2]

A page is a fixed-length contiguous block of virtual memory, while a frame is the corresponding fixed-length block of physical memory. Pages are mapped to frames during memory management.

e. Identify any three memory allocation strategies. [3]

Contiguous Memory Allocation: Memory is allocated in contiguous blocks. It simplifies access but can lead to fragmentation.

Paging: Memory is divided into fixed-size pages. This allows non-contiguous allocation but may introduce page faults.

Segmentation: Memory is divided into variable-sized segments based on logical divisions (such as functions or data structures), which can better reflect a program's structure.
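To illustrate how paging maps addresses, here is a minimal translation sketch (the page size and page-table contents are made up for the example):

```python
PAGE_SIZE = 4096                       # assume 4 KiB pages
page_table = {0: 7, 1: 3, 2: 9}        # virtual page number -> physical frame

def translate(vaddr):
    """Split a virtual address into (page, offset) and map it to physical."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault")    # the page is not resident
    return page_table[page] * PAGE_SIZE + offset
```

Virtual address 4100 falls in page 1 at offset 4, so it maps to frame 3: 3 * 4096 + 4 = 12292.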

f. Distinguish between security and protection in computer systems. [2]

Security refers to the mechanisms that safeguard the system from unauthorized access and attacks, while protection refers to the methods implemented to control how resources are accessed and used by legitimate users.

Question 2

a. Identify and describe four components of a computer system. [8]


Central Processing Unit (CPU): The brain of the computer; it interprets
instructions from programs and processes data.

Memory: Includes both volatile (RAM) and non-volatile (ROM) storage for
data and instructions currently in use.

Storage: Permanent storage devices like hard drives and SSDs that retain
data when the computer is powered off.

Input/Output Devices: Peripheral devices (such as keyboards, mice, and printers) that allow users to interact with the computer and output results.

b. i. Calculate the waiting time for Process P3 using First-Come First-Served. [2]

(List the burst times for the processes. Calculate the waiting time based
on the order they are executed.)

b. ii. Calculate the waiting time for Process P5 using Shortest Job First
Algorithm. [3]

(Analyze the burst times and determine the order for the Shortest Job First
approach.)

b. iii. Calculate the average time the CPU is busy executing these
processes. [2]

(Divide the sum of the burst times by the number of processes to determine CPU usage.)

c. Distinguish between a threat and an attack in computer systems. [2]

A threat is a potential danger that can exploit a vulnerability, whereas an attack is an actual unauthorized attempt to gain access to system resources.
d. Describe any three operations on files. [3]

Create: Generating a new file in the storage system, allowing data to be stored.

Read: Accessing and retrieving data from a file, enabling users or processes to view the contents.

Write: Modifying or adding data to an existing file, which changes the stored content.

Question 3

a. Name four disk-arm scheduling algorithms. Outline the basic algorithm for each. [8]

First-Come First-Served (FCFS): Requests are processed in the order they arrive. The disk arm moves to each request sequentially, which is simple but can be inefficient.

Shortest Seek Time First (SSTF): Selects the request that is closest to the
current position of the disk arm. This reduces the average wait time but
can lead to starvation for distant requests.

Elevator (SCAN): The disk arm moves in one direction servicing requests
until it reaches the end of the disk and then reverses direction. This
method reduces overall seek time by ensuring both ends are serviced.

C-SCAN (Circular SCAN): Similar to SCAN, but when the arm reaches one
end, it jumps back to the other end without servicing any requests during
the return trip. This provides a more uniform wait time.
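These algorithms' seek distances can be compared with a short simulation (a sketch; this Elevator version reverses at the last request rather than at the disk edge):

```python
def fcfs_seek(start, requests):
    """Total tracks traversed when requests are served in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(pos - r)
        pos = r
    return total

def sstf_seek(start, requests):
    """Always service the request closest to the current head position."""
    pending, pos, total = list(requests), start, 0
    while pending:
        nxt = min(pending, key=lambda r: abs(pos - r))
        total += abs(pos - nxt)
        pos = nxt
        pending.remove(nxt)
    return total

def elevator_seek(start, requests, direction=-1):
    """Service all requests in the current direction, then reverse once."""
    lower = sorted(r for r in requests if r < start)    # tracks below the head
    upper = sorted(r for r in requests if r >= start)   # tracks at/above it
    order = lower[::-1] + upper if direction == -1 else upper + lower[::-1]
    return fcfs_seek(start, order)
```

For instance, with the head at track 30 moving downward and requests 27, 129, 110, 186, 147, 41, 10, 64, 120, the three functions return 486, 338, and 196 tracks respectively.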

b. Why is it generally correct to favor I/O bound processes over CPU-bound processes? [2]
I/O-bound processes use the CPU only briefly before blocking on input/output, so favoring them lets them issue their I/O requests quickly and keeps the I/O devices busy, while CPU-bound processes can use the CPU in the meantime, improving overall system utilization.

c. What is the difference between pre-emptive scheduling and non-preemptive scheduling? What is the issue with the latter? [4]

Pre-emptive scheduling allows a running process to be interrupted so that a higher-priority process can execute, while non-preemptive scheduling does not allow interruption once a process has started executing. The issue with non-preemptive scheduling is that a long-running process can monopolize the CPU, forcing short or high-priority processes to wait and degrading response times.

d. Describe two virtual memory page fetch policies. Which is less common
in practice? Why? [2]

Demand Paging: Pages are loaded into memory only when they are
needed. This reduces memory usage but can lead to higher page fault
rates.

Pre-paging: Pages are loaded into memory preemptively based on predicted usage. This can decrease the number of page faults but might waste memory by loading unnecessary pages.

Pre-paging is less common in practice: demand paging is more efficient in memory utilization, whereas pre-paging may load pages that are never actually needed.

e. Enumerate some pros and cons for increasing the page size. [2]

Pros:

Reduced page table size, as fewer pages are needed to cover a given address space.

Fewer page faults for programs with good locality, and more efficient disk transfers, since each fault brings in more data at once.

Cons:

Greater internal fragmentation, since larger pages are more likely to leave unused space in their frames.

Wasted memory and I/O bandwidth when only a small part of each large page is actually used.

Question 4

a. What are the objectives of operating system? [4]

Resource Management: Efficiently and fairly managing resources (CPU, memory, I/O devices) among different processes.

Process Management: Controlling the execution of processes, including scheduling, synchronization, and communication.

Security and Protection: Ensuring unauthorized users cannot access system resources and safeguarding data integrity.

User Interface: Providing a user-friendly interface for interacting with system resources, either via command-line or graphical interfaces.

b. Does timesharing differ from multiprogramming? If so, how? [4]

Timesharing allows multiple users to interact with a computer concurrently by rapidly switching between processes to give the illusion of simultaneous execution, while multiprogramming focuses on maximizing CPU utilization by running multiple processes without user interaction and does not guarantee response times for individual users.

c. Discuss the main advantages of the layered approach to system design. [4]


Modularity: Allows for easier development and maintenance since changes
in one layer don’t necessarily affect others.

Abstraction: Layers provide a clear abstraction, helping to conceal complexities from users and other layers.

Improved Debugging and Testing: Isolating each layer enables targeted testing and debugging.

Flexibility: Layers can be replaced or modified independently, enhancing adaptable system design.

d. Define inter-process communication. [1]

Inter-process communication (IPC) refers to mechanisms that allow processes to communicate and synchronize their actions when running concurrently, using methods such as message passing, shared memory, or signals.

e. What are the three main purposes of an operating system? [3]

Managing Hardware Resources: Overseeing hardware components like the CPU, memory, and I/O devices, acting as an intermediary between user applications and hardware.

Process Management: Handling the creation, scheduling, and termination of processes, ensuring they have sufficient resources for execution.

File Management: Organizing data storage, providing a means to create, read, write, and delete files, while also managing access and permissions.

Question 5

a. What are the three major activities of an operating system in regard to secondary-storage management? [3]
Disk Scheduling: Managing the order of disk access requests to optimize
performance.

Disk Formatting: Preparing the disk for storage by setting up a file system
and organizing data blocks.

Storage Allocation: Assigning and managing space on the disk for different
files and applications.

b. Provide definitions for the following terms:

i. Process: An instance of a program in execution, encompassing the program code, current activity, and allocated resources.

ii. Process State: The current status of a process, including states like
running, waiting, ready, or terminated.

iii. Multiprogramming: A method where multiple processes are loaded into memory and executed concurrently to maximize CPU utilization.

iv. Time Sharing: A technique where multiple users share system resources by dividing CPU time and giving each user a time slice.

c. Can a process make a transition from the ready state to the blocked
state? Why or why not? [2]

No, a process cannot transition directly from the ready state to the
blocked state. It must first be selected for execution and then encounter a
situation (such as waiting for I/O) that causes it to become blocked.

d. In round-robin scheduling, new processes are placed at the end of the queue rather than at the beginning. Suggest a reason for this. [5]

New processes are placed at the end to ensure fairness and prevent
starvation. This approach guarantees that each process receives a fair
amount of CPU time in a cyclic order, promoting a balanced distribution of
processing time among all processes.
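A minimal sketch (assuming all processes arrive at t = 0; names are illustrative) shows how re-queuing at the back cycles CPU time fairly:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst time}. Returns each process's completion time.
    A preempted process re-joins at the BACK of the ready queue."""
    ready = deque(bursts.items())
    t, finish = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            ready.append((name, remaining - run))   # back of the queue
        else:
            finish[name] = t
    return finish
```

For example, `round_robin({'P1': 3, 'P2': 1, 'P3': 2}, 1)` completes P2 at t = 2, P3 at t = 5, and P1 at t = 6.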

e. What is the distinction between user mode and kernel mode? [1]

User mode is a restricted mode where applications run without direct access to hardware or critical system resources, while kernel mode allows the operating system to execute privileged instructions.

f. Define the notion of deadlock. What is the difference between a deadlock and an unsafe state? [2]

A deadlock occurs when two or more processes are unable to proceed because each is waiting for another to release resources. An unsafe state is one from which the system can no longer guarantee that deadlock will be avoided; deadlock has not necessarily occurred yet, and may still be avoided if processes release their resources favourably.

g. Assume that n processes apply the fully distributed approach for mutual
exclusion in a distributed system. How many messages are required to
enter a critical section? Explain your answer. [3]

In a fully distributed mutual exclusion algorithm (such as Ricart-Agrawala), a process that wants to enter its critical section sends a request message to each of the other n - 1 processes and must receive a reply from each of them before proceeding. Entering the critical section therefore requires 2(n - 1) messages: n - 1 requests plus n - 1 replies.

Question 6

a. In a batch system, there are five jobs A to E with run times 2, 4, 1, 1, 1 seconds respectively. Their arrival times are 0, 0, 3, 3, 3 seconds. What is the turnaround time using the shortest-job-first scheduling algorithm? Is this the optimal turnaround time among the non-pre-emptive runs? [5]

At t = 0 only A and B have arrived, so A (burst 2) runs first, finishing at t = 2. At t = 2 only B is available, so B runs until t = 6, after which C, D, and E run back-to-back, finishing at t = 7, 8, and 9. The turnaround times are A = 2, B = 6, C = 4, D = 5, E = 6, for an average of 23/5 = 4.6 seconds. This is not the optimal non-pre-emptive schedule: leaving the CPU idle from t = 2 to t = 3 and running C, D, E before B gives turnarounds A = 2, C = 1, D = 2, E = 3, B = 10, an average of 18/5 = 3.6 seconds.
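The shortest-job-first turnaround times can be checked with a small non-preemptive simulation (a sketch; names are illustrative):

```python
def sjf_turnaround(jobs):
    """jobs: {name: (arrival, burst)}. Non-preemptive shortest-job-first.
    Returns {name: turnaround time}."""
    pending, t, done = dict(jobs), 0, {}
    while pending:
        ready = [n for n, (a, _) in pending.items() if a <= t]
        if not ready:                                    # idle until next arrival
            t = min(a for a, _ in pending.values())
            continue
        name = min(ready, key=lambda n: pending[n][1])   # shortest burst wins
        arrival, burst = pending.pop(name)
        t += burst
        done[name] = t - arrival                         # finish minus arrival
    return done
```

For the five jobs here, `sjf_turnaround({'A': (0, 2), 'B': (0, 4), 'C': (3, 1), 'D': (3, 1), 'E': (3, 1)})` yields turnarounds summing to 23 seconds.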

b. Consider the Dining Philosophers problem with n philosophers but with n + 1 forks. Is deadlock possible? Explain your answer. [3]

No, deadlock is not possible. With n philosophers and n + 1 forks, even if every philosopher holds one fork, one fork is still free, so the philosopher adjacent to it can acquire a second fork and eat. The circular wait required for deadlock therefore cannot persist.

c. Explain how access control lists (ACLs) and capability lists are used as
protection mechanisms. [5]

ACLs are attached to each object and list which permissions users or groups have on it, while capability lists are attached to each user (or process) and enumerate the objects it may access and the actions it may perform. Both mechanisms enforce security by ensuring that users can only access resources for which they have explicit permissions, thus managing and controlling access to critical system resources efficiently.

d. Explain the difference between internal and external fragmentation. Why should they be avoided? [3]

Internal fragmentation occurs when memory blocks are allocated, but not
all of the allocated space is used, leading to wasted space within the
blocks. External fragmentation happens when free memory is split into
small, non-contiguous blocks, making it difficult to allocate larger blocks to
new processes. Both types of fragmentation decrease memory utilization
and can lead to increased overhead and wasted resources.
e. Explain two advantages of using binary semaphores to achieve mutual
exclusion among several processes over Peterson’s solution. [4]

Simplicity: Binary semaphores provide a straightforward mechanism for ensuring mutual exclusion, reducing implementation complexity compared to the algorithmic nature of Peterson's solution.

Performance: Binary semaphores can be managed by the operating system, which can block waiting processes rather than have them busy-wait, allowing for potentially better performance through system-managed scheduling.

Question 7

a. Consider the following sequence of disk track requests 27, 129, 110,
186, 147, 41, 10, 64, 120. Assume initially, the head is at track 30 and
moving in the direction of decreasing track number. Compute the number
of tracks the head traverses using FIFO, SSTF, and Elevator algorithms. [5]

FIFO: The head visits 27, 129, 110, 186, 147, 41, 10, 64, 120 in order, traversing 3 + 102 + 19 + 76 + 39 + 106 + 31 + 54 + 56 = 486 tracks.

SSTF: Servicing the nearest request each time gives the order 27, 41, 64, 110, 120, 129, 147, 186, 10, traversing 3 + 14 + 23 + 46 + 10 + 9 + 18 + 39 + 176 = 338 tracks.

Elevator: Moving toward decreasing track numbers first, the head services 27 and 10, then reverses and services 41, 64, 110, 120, 129, 147, 186, traversing 20 + 176 = 196 tracks (assuming the arm reverses at the last request in each direction).

b. Explain how monitors are used in inter-process communication. [6]

Monitors are high-level synchronization constructs that allow processes to share resources without conflict. They encapsulate shared data and the operations that can be performed on it, regulating access through condition variables that let processes wait until certain conditions are met before accessing the shared data, ensuring mutual exclusion and facilitating effective IPC.

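As a rough illustration (Python's threading.Condition plays the role of a monitor's condition variable; the class and its limit parameter are made up for the example):

```python
import threading

class BoundedCounter:
    """A tiny monitor: one lock serializes entry to every method, and a
    condition variable lets callers wait inside until a predicate holds."""
    def __init__(self, limit):
        self._lock = threading.Lock()
        self._nonzero = threading.Condition(self._lock)
        self._value, self._limit = 0, limit

    def increment(self):
        with self._lock:                 # mutual exclusion on entry
            if self._value < self._limit:
                self._value += 1
                self._nonzero.notify()   # wake one waiting decrementer

    def decrement(self):
        with self._lock:
            while self._value == 0:      # re-check the condition on wakeup
                self._nonzero.wait()
            self._value -= 1
```

A caller that decrements while the counter is zero simply sleeps inside the monitor until some other thread increments and signals.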
c. A small computer has four page frames, and a process makes the
following list of page references: 1, 2, 3, 4, 1, 5, 2, 3, 1, 2. How many page
faults occur using FIFO, second chance, and least recently used page
replacement algorithms? [6]

Simulating the reference string 1, 2, 3, 4, 1, 5, 2, 3, 1, 2 with four frames:

FIFO: faults on 1, 2, 3, 4, then 5 (evicting 1), 1 (evicting 2), and 2 (evicting 3), for 7 page faults.

Second chance: faults on 1, 2, 3, 4, then 5 (evicting 1 after all reference bits are cleared) and 1 (evicting 4, since 2 and 3 had been re-referenced), for 6 page faults.

LRU: faults on 1, 2, 3, 4, then 5 (evicting 2), 2 (evicting 3), and 3 (evicting 4), for 7 page faults.
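The FIFO and LRU counts can be reproduced with a step-by-step simulation (a sketch; second chance additionally needs reference bits and is omitted here):

```python
def fifo_faults(frames, refs):
    """FIFO replacement: evict the page that has been resident longest."""
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)           # the oldest page sits at the front
            mem.append(p)
    return faults

def lru_faults(frames, refs):
    """LRU replacement: `mem` is kept ordered least- to most-recently used."""
    mem, faults = [], 0
    for p in refs:
        if p in mem:
            mem.remove(p)            # hit: refresh its recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)           # evict the least recently used page
        mem.append(p)
    return faults
```

Both return 7 for the reference string 1, 2, 3, 4, 1, 5, 2, 3, 1, 2 with four frames.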

d. What is Belady's anomaly? [2]

Belady's anomaly refers to the counterintuitive situation where increasing the number of page frames for a process results in an increase in the number of page faults, contradicting the expectation that having more frames should always reduce page faults.

Question 8

a. Consider the following two statements about IPC:

i. A starvation-free solution is also deadlock-free.

ii. A deadlock-free solution is starvation-free.

Explain their validity and any exceptions that may apply in certain circumstances.

Generally, the first statement is true, while the second is false: a solution can be deadlock-free yet still allow starvation, since guaranteeing that some process always makes progress does not guarantee that every process eventually does.

b. In the solution for the bounded buffer problem, two counting semaphores (empty and full) are used. Explain why it is better to use them than keeping track of the empty slots by using an ordinary variable. [6]
Counting semaphores make checking and updating the slot count a single atomic operation, and they block a process automatically until a slot becomes available. An ordinary variable would have to be read and updated under separate mutual exclusion, and processes would have to busy-wait on it, risking race conditions and wasted CPU time.
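A minimal producer-consumer sketch using the two counting semaphores (plus a lock for the buffer structure itself; capacity and item counts are illustrative):

```python
import threading
from collections import deque

CAPACITY = 4
buffer = deque()
empty = threading.Semaphore(CAPACITY)    # counts free slots
full = threading.Semaphore(0)            # counts filled slots
mutex = threading.Lock()                 # protects the buffer structure

def producer(items):
    for item in items:
        empty.acquire()                  # block while the buffer is full
        with mutex:
            buffer.append(item)
        full.release()                   # announce one more filled slot

def consumer(n, out):
    for _ in range(n):
        full.acquire()                   # block while the buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()                  # announce one more free slot

consumed = []
p = threading.Thread(target=producer, args=(range(20),))
c = threading.Thread(target=consumer, args=(20, consumed))
p.start(); c.start(); p.join(); c.join()
```

With one producer and one consumer sharing a FIFO buffer, all 20 items arrive in order; the semaphores alone pace the two threads.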

c. What is thrashing? How might it be detected? How might one recover from it once detected? [5]

Thrashing occurs when a system spends more time paging than executing
processes due to insufficient memory. Detection can be done by
monitoring page fault rates. Recovery involves increasing physical
memory, decreasing the number of active processes, or optimizing
workload to reduce memory pressure.

d. How can buffering improve the performance of a computer system? [1]

Buffering can enhance performance by temporarily holding data in memory during input or output operations, allowing for smoother processing and reducing the wait times for both the CPU and the I/O devices.

e. Describe, in order, the steps taken by a kernel to context-switch between processes on a single-core processor. [2]

1. Save the state of the currently running process, including its CPU registers and program counter.

2. Update the process control block (PCB) of the current process to reflect its new state (e.g., from running to ready).

3. Select the next process to run from the ready queue based on the scheduling algorithm.

4. Load the state of the selected process from its PCB, restoring its CPU registers and program counter.

5. Transfer control to the selected process, allowing it to resume execution.

Question 9

a. What are the five major activities of an operating system in regard to file management? [8]

File Creation: The operating system allows users to create new files and
directories in the file system.

File Deletion: It provides the ability to remove files from the file system
when they are no longer needed.

File Manipulation: This includes reading, writing, and modifying files, allowing users to interact with the stored data.

File Access Control: The OS manages permissions, defining which users can access or modify certain files, ensuring data security.

File System Integrity: The operating system ensures that the file system
remains organized and error-free, managing fragmentation and keeping
track of free and used space.

b. What are the CPU scheduling algorithm criteria? [4]

CPU Utilization: Keeping the CPU as busy as possible, maximizing throughput.

Throughput: The number of processes that complete their execution in a given time frame.

Turnaround Time: The total time taken to execute a process from submission to completion, aiming for minimal values.

Waiting Time: The total time a process spends waiting in the ready queue; minimizing it is critical for user satisfaction.

c. Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750
KB, and 125 KB (in order). How would the following algorithms place
processes of size 115 KB, 500 KB, 358 KB, 200 KB, and 375 KB (in order)?

i. First-fit: Place each process into the first partition that is large enough.

Result: 115 KB → 300 KB, 500 KB → 600 KB, 358 KB → 750 KB, 200 KB → 350 KB; the 375 KB process must wait, since only the 200 KB and 125 KB partitions remain free.

ii. Best-fit: Allocate the smallest free partition large enough to accommodate each process.

Result: 115 KB → 125 KB, 500 KB → 600 KB, 358 KB → 750 KB, 200 KB → 200 KB; the 375 KB process must wait, since only the 300 KB and 350 KB partitions remain free.
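The placements can be verified with a short simulation (a sketch; each partition is assumed to hold at most one process, and processes that fit nowhere wait):

```python
def allocate(partitions, processes, strategy):
    """Return {process size: partition index, or None if it must wait}.
    strategy is 'first' (first-fit) or 'best' (best-fit)."""
    free = list(partitions)              # remaining sizes; None = occupied
    placement = {}
    for p in processes:
        fits = [i for i, s in enumerate(free) if s is not None and s >= p]
        if not fits:
            placement[p] = None          # no partition is large enough: wait
            continue
        i = fits[0] if strategy == 'first' else min(fits, key=lambda j: free[j])
        placement[p] = i
        free[i] = None                   # partition is now occupied
    return placement
```

With the partition list [300, 600, 350, 200, 750, 125] and process list [115, 500, 358, 200, 375], both strategies leave the 375 KB process waiting.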
