Ponce - WorkSheet 1 - System Admin and Maintenance

The document provides a comprehensive overview of various concepts related to system administration and maintenance, including file systems, buffering, caching, memory allocation algorithms, page-based virtual memory, and disk scheduling algorithms. It explains the calculations for maximum file size, the importance of buffering in file systems, and details different memory allocation strategies and their performance implications. Additionally, it discusses page replacement algorithms and their effectiveness, as well as the producer-consumer problem in I/O and various disk scheduling algorithms with their advantages and disadvantages.

West Visayas State University

College of Information and Communications Technology

System Administration and Maintenance Worksheet

Instructions: Carefully read each question and provide detailed answers. Where applicable,
show calculations, provide examples, and justify your responses. Use clear and concise
language, and ensure that your explanations demonstrate understanding of the concepts.

1. File System and Storage


a. What is the maximum file size supported by a file system that has 16 direct blocks, single,
double, and triple indirection? The block size is 512 bytes, and the disk block number can be
stored in 4 bytes. Show your calculations.
Given:
Block size = 512 bytes
Disk block number size = 4 bytes
Direct blocks = 16
Single, double, and triple indirect blocks

Each indirect block stores block addresses (pointers to data blocks). The number of block
addresses per indirect block is:
pointers per block = block size / size of block number = 512 bytes / 4 bytes = 128 pointers
per block

We therefore have:
direct blocks = 16
single indirect block = 128 data blocks
double indirect block = 128 x 128 = 16,384 data blocks, since each of its 128 entries points
to a single indirect block that itself addresses 128 data blocks
triple indirect block = 128 x 128 x 128 = 2,097,152 data blocks

Adding up all the blocks:

total blocks = 16 + 128 + 16,384 + 2,097,152 = 2,113,680 blocks

Calculating the Maximum File Size

Each block stores 512 bytes of data, so the maximum file size is:
Max file size = total blocks x block size = 2,113,680 x 512 = 1,082,204,160 bytes ≈ 1.008 GiB
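
As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python
(the constants are simply the values given in the problem):

    BLOCK_SIZE = 512      # bytes per block (given)
    POINTER_SIZE = 4      # bytes per disk block number (given)
    DIRECT_BLOCKS = 16

    ptrs = BLOCK_SIZE // POINTER_SIZE                        # 128 pointers per indirect block
    total_blocks = DIRECT_BLOCKS + ptrs + ptrs**2 + ptrs**3
    max_bytes = total_blocks * BLOCK_SIZE

    print(total_blocks)          # 2113680
    print(max_bytes)             # 1082204160
    print(max_bytes / 2**30)     # ~1.008 GiB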

2. Filesystem Buffer Cache


a. Explain why buffering is needed in a filesystem buffer cache.
File system buffering helps with backpressure and overall memory control. Buffering is
necessary in file systems to improve efficiency and performance by reducing the frequency of
slow disk accesses. It allows for larger data transfers, minimizes fragmentation, and smooths
out variations in data flow. As a result, buffering significantly enhances overall system
performance.

b. How does buffering improve performance? Discuss any potential drawbacks.


Use of the buffer cache reduces the amount of disk traffic, thereby increasing overall
system throughput and decreasing response time. Buffering improves performance by
temporarily storing data in RAM, which is much faster than disk access. It helps manage bursts
of requests, preventing data loss by queuing excess write requests instead of dropping them.
For reads, buffering caches frequently requested data, reducing redundant disk access and
increasing throughput in proportion to the cache hit ratio. However, buffering has drawbacks,
including increased memory usage and the risk of data loss if the system crashes before
buffered data is written to disk.

c. How does caching within the buffer cache improve system performance?
The buffer cache stores frequently accessed data in RAM, reducing the need for slower
disk access and improving performance. It leverages locality of reference, ensuring frequently
used data remains readily available. The buffer cache employs caching algorithms that manage
data retention based on access patterns, optimizing efficiency. For write operations, it
temporarily holds modified data, enabling batch writes that minimize disk I/O overhead and
enhance performance.
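
A minimal sketch of such a cache in Python, using least-recently-used eviction over an
OrderedDict; the class and the read_block callback are illustrative, not a real kernel interface:

    from collections import OrderedDict

    class BufferCache:
        """Toy read cache with LRU eviction (illustrative only)."""
        def __init__(self, capacity, read_block):
            self.capacity = capacity      # max number of cached blocks
            self.read_block = read_block  # fallback that actually reads the disk
            self.cache = OrderedDict()    # block number -> block data

        def get(self, block_no):
            if block_no in self.cache:
                self.cache.move_to_end(block_no)   # mark as most recently used
                return self.cache[block_no]        # cache hit: no disk access
            data = self.read_block(block_no)       # cache miss: slow disk read
            self.cache[block_no] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)     # evict least recently used block
            return data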

3. Write-through Caching
a. Why might filesystems managing external storage devices use write-through caching (i.e.,
avoid buffering writes), even though it negatively affects performance?
Write-through caching ensures data consistency between the cache and the storage device,
reducing the risk of data loss if the system crashes or the cache fails, and provides a reliable
solution for preserving critical data. This matters especially for external and removable media,
which may be unplugged at any moment before buffered writes could be flushed. While
write-through hurts write performance compared to other caching strategies, it guarantees that
updates are persisted immediately, making it suitable for applications where data integrity is
paramount.
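
A minimal sketch contrasting the two write policies; the cache, disk, and dirty objects are
hypothetical stand-ins, not a real filesystem API:

    def write_through(cache, disk, block_no, data):
        cache[block_no] = data        # update the cached copy
        disk.write(block_no, data)    # persist immediately: safe even if unplugged now

    def write_back(cache, dirty, block_no, data):
        cache[block_no] = data        # update the cached copy only
        dirty.add(block_no)           # persisted later by a flush daemon: faster, riskier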

4. System Flush Mechanisms


a. What is the role of flushd in a UNIX system?
In a UNIX system, flushd ensures that modified data in memory (buffers and caches) is
written to disk, maintaining data integrity and consistency. It periodically writes modified
pages from the buffer cache to disk, preventing data loss in the event of a crash; it minimizes
corruption risks by safely storing uncommitted writes; and it supports write-back caching by
batching many small writes into larger, more efficient ones.
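
Applications that cannot wait for the periodic flush can request one explicitly. In Python,
for example, os.fsync asks the kernel to push a file's dirty buffers to the device (the file
name here is arbitrary):

    import os

    with open("journal.log", "ab") as f:
        f.write(b"committed\n")
        f.flush()              # move Python's user-space buffer into the kernel
        os.fsync(f.fileno())   # ask the kernel to write its dirty buffers to disk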

b. What is the equivalent mechanism in Windows, and how does it compare to flushd?
The Windows Write-Back Cache Manager performs a similar role to UNIX's flushd: it
tracks dirty (modified) pages in memory and regularly writes them to disk to maintain data
integrity and system performance.

Unlike the classic UNIX daemon, however, Windows employs a more dynamic method:
flushing is triggered by a variety of system variables, including memory pressure, disk activity,
and program behavior, rather than relying only on fixed time intervals.

5. Memory Allocation Algorithms

a. List and describe the four main memory allocation algorithms.


1. First Fit Algorithm
- In this strategy, memory is scanned from the beginning in search of the first hole that
is large enough to accommodate the process; as soon as such a hole is found, it is
allocated. This strategy is less time consuming, since it stops at the first hole whose
size is greater than or equal to the process's requirement instead of examining the
entire set of holes. The degree of internal fragmentation depends on the difference in
size between the hole and the process: if the process is much smaller than the hole, a
considerable amount of memory is wasted.

2. Next Fit Algorithm

- Next Fit is a modified version of the First Fit algorithm. Instead of starting the
search from the beginning of memory each time, it resumes searching from the point of
the last allocation, which spreads allocations more evenly across memory.

3. Best Fit Algorithm

- In this strategy, memory is scanned from the beginning for the set of holes whose
size is equal to or greater than the size of the process, and the smallest hole that is
still big enough to accommodate the process is allocated. This strategy results in much
better memory utilization than first fit. However, searching the entire set of holes
makes it a time-consuming procedure, though the effort pays off in less internal
fragmentation than first fit.

4. Worst Fit Algorithm

- In this strategy, memory is scanned from the beginning and the entire set of holes is
examined; the hole with the largest size among them is allocated to the process. This
strategy generates the largest leftover hole, which may be big enough to accommodate
another process. Even so, the resulting fragmentation is much worse than under first
fit or best fit, leading to very poor memory management, and the procedure is time
consuming because it must search for the largest hole in the set. A minimal sketch of
the selection step for all four strategies follows this list.
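
Below is a compact Python sketch of the hole-selection step for all four strategies, over a
toy free list of (start, size) tuples; the representation and values are illustrative, not
taken from any particular allocator:

    def choose_hole(holes, size, strategy, last=0):
        # Returns the index of the chosen hole, or None if nothing fits.
        if strategy == "first":
            return next((i for i, h in enumerate(holes) if h[1] >= size), None)
        if strategy == "next":   # resume scanning just after the previous allocation
            order = list(range(last, len(holes))) + list(range(last))
            return next((i for i in order if holes[i][1] >= size), None)
        fits = [i for i, h in enumerate(holes) if h[1] >= size]
        if not fits:
            return None
        if strategy == "best":   # smallest hole that still fits
            return min(fits, key=lambda i: holes[i][1])
        if strategy == "worst":  # largest hole, leaving the biggest leftover
            return max(fits, key=lambda i: holes[i][1])

    holes = [(0, 100), (200, 30), (300, 500), (900, 60)]
    for s in ("first", "next", "best", "worst"):
        print(s, choose_hole(holes, 50, s, last=2))
    # first -> 0, next -> 2, best -> 3, worst -> 2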

b. Which two of these algorithms are most commonly used in practice? Justify your answer.
​ The First Fit and Best Fit algorithms are the most commonly used in practice due to
their efficiency and memory utilization benefits. First Fit is widely preferred because it is fast
and requires minimal computation, as it allocates the first available hole large enough for the
process. This makes it ideal for systems requiring quick memory allocation with low overhead,
though it may lead to fragmentation over time. On the other hand, Best Fit is commonly used
when memory efficiency is a priority, as it minimizes internal fragmentation by selecting the
smallest possible hole that can accommodate the process. Although it is slower than First Fit
due to the need for searching, it ensures better memory utilization, making it suitable for
systems with limited memory resources. In contrast, Next Fit does not provide significant
advantages over First Fit and can lead to inefficient allocation, while Worst Fit results in
excessive internal fragmentation by leaving large unused memory gaps. Therefore, First Fit is
preferred for its speed, while Best Fit is chosen for its optimized memory usage.

6. Page-Based Virtual Memory


a. Describe page-based virtual memory, including the roles of pages, frames, page tables, and
the Memory Management Unit (MMU).

Page-based virtual memory is a memory management technique used by operating
systems to give the appearance of a large, contiguous block of memory to applications, even if
the physical memory (RAM) is limited. It allows larger applications to run on systems with less
RAM.
A page is a fixed-size block of data in virtual memory. The operating system divides a
program’s memory into pages, which can be loaded into RAM as needed. A frame is a
fixed-size block of physical memory (RAM) where pages are loaded. Frames and pages are of
equal size, allowing seamless mapping between virtual and physical memory. The page table,
maintained by the OS, tracks virtual-to-physical memory mappings and handles page faults
when data must be loaded from disk. The Memory Management Unit (MMU), built into the CPU,
translates virtual addresses into physical addresses. This system optimizes RAM usage,
supports multitasking, and enables efficient memory management.
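
The translation step the MMU performs can be illustrated with a toy Python function; the
page size and page-table contents here are made up for the example:

    PAGE_SIZE = 4096                       # 4 KiB pages
    page_table = {0: 7, 1: 3, 2: None}     # virtual page -> physical frame (None = on disk)

    def translate(vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)   # split into page number and offset
        frame = page_table.get(vpn)
        if frame is None:
            raise LookupError(f"page fault on virtual page {vpn}")  # OS loads it from disk
        return frame * PAGE_SIZE + offset

    print(hex(translate(0x1234)))   # vpn 1 -> frame 3, so 0x1234 maps to 0x3234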

7. Advantages of Page-Based Virtual Memory


a. What are the advantages of a system with page-based virtual memory compared to a system
that only uses base and limit registers with swapping?
​ Page-based virtual memory offers better memory utilization by allowing non-contiguous
allocation, reducing fragmentation, while base and limit registers require contiguous memory,
leading to inefficiency. Faster execution is achieved as paging loads only necessary pages,
whereas swapping loads entire processes, increasing delays. Lower swapping overhead in
paging minimizes disk I/O, whereas swapping entire processes slows performance. Improved
multitasking is possible with paging, as multiple processes share RAM efficiently, unlike
swapping, which limits concurrency. Simplified memory management in paging allows dynamic
allocation, whereas base and limit registers require manual handling. Better protection and
isolation prevent processes from accessing each other’s memory, unlike fixed address ranges in
base-limit systems.

8. Page Replacement Algorithms


a. Name and describe four page replacement algorithms.
●​ First In First Out (FIFO)
○​ The FIFO algorithm is the simplest of all the page replacement algorithms. In this,
we maintain a queue of all the pages that are in the memory currently. The oldest
page in the memory is at the front end of the queue and the most recent page is
at the back or rear end of the queue.
○​ Whenever a page fault occurs, the operating system looks at the front end of the
queue to know the page to be replaced by the newly requested page. It also adds
this newly requested page at the rear end and removes the oldest page from the
front end of the queue.

● Optimal Page Replacement


○​ Optimal page replacement is the best page replacement algorithm as this
algorithm results in the least number of page faults. In this algorithm, the pages
are replaced with the ones that will not be used for the longest duration of time in
the future. In simple terms, the pages that will be referred to farthest in the future
are replaced in this algorithm.

●​ Least Recently Used (LRU) Page Replacement Algorithm


○ The least recently used page replacement algorithm keeps track of page usage over a
period of time. It works on the principle of locality of reference, which states that a
program tends to access the same set of memory locations repeatedly over a short
period of time, so pages that have been used heavily in the recent past are most likely
to be used heavily in the future as well.
○ In this algorithm, when a page fault occurs, the page that has not been used for the
longest duration of time is replaced by the newly requested page.

●​ Last In First Out (LIFO) Page Replacement Algorithm


○ This is the Last In First Out algorithm and works on LIFO principles: the newest
page is replaced by the requested page. It is usually implemented with a stack of the
pages currently in memory, with the newest page at the top; whenever a page fault
occurs, the page at the top of the stack is replaced. A small simulation of FIFO and
LRU follows this list.
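
A small simulation makes the comparison concrete: FIFO and LRU below differ only in whether
a hit refreshes a page's position. The reference string is made up, with heavy locality that
favors LRU:

    def page_faults(refs, frames, policy):
        mem, faults = [], 0        # mem[0] is the next eviction victim
        for page in refs:
            if page in mem:
                if policy == "LRU":        # a hit refreshes recency under LRU
                    mem.remove(page)
                    mem.append(page)
                continue
            faults += 1
            if len(mem) == frames:
                mem.pop(0)   # FIFO: oldest arrival; LRU: least recently used
            mem.append(page)
        return faults

    refs = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 3]
    print(page_faults(refs, 3, "FIFO"))   # 8 faults
    print(page_faults(refs, 3, "LRU"))    # 6 faults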

b. Compare these algorithms critically in terms of performance and implementation complexity.


​ The First In First Out (FIFO) algorithm has poor performance because it does not
consider page usage frequency, leading to frequent page faults. It also suffers from Belady’s
anomaly, where increasing memory frames can unexpectedly increase page faults. However,
its implementation is simple, requiring only a queue to track pages. The Optimal Page
Replacement algorithm provides the best performance by replacing the page that will not be
needed for the longest time, resulting in the least number of page faults. However, its
implementation is highly complex as it requires future knowledge of page accesses, making it
impractical for real-world systems. The Least Recently Used (LRU) algorithm performs better
than FIFO by considering recent usage patterns, avoiding Belady’s anomaly, and working well
with the locality of reference principle. Despite its advantages, LRU has a high implementation
complexity since it requires additional data structures such as linked lists or counters, as well
as hardware support to track page access history. The Last In First Out (LIFO) algorithm
performs poorly because it removes the most recently added page, often leading to frequent
page faults and inefficient memory usage. While it has a simple implementation using a stack,
its poor memory management makes it impractical. In conclusion, Optimal Page Replacement
is theoretically the best but infeasible, LRU provides the best balance between performance and
practicality despite its complexity, FIFO is easy to implement but inefficient, and LIFO is both
simple and ineffective. Consequently, LRU (usually in an approximated form such as the clock
algorithm) is the preferred choice in real-world systems, as it reduces page faults while
remaining implementable with hardware support.

9. Producer-Consumer Problem in I/O


a. Explain how the producer-consumer problem is relevant to operating system I/O.
​ The producer-consumer problem models how OS I/O manages data transfer between
processes and hardware. In this scenario, the producer writes data (e.g., disk writes, network
sends), while the consumer reads it (e.g., file reads, network receives), using a fixed-size buffer
to store data temporarily.

The problem arises when the producer writes to a full buffer or the consumer reads from an
empty one, leading to data loss or inefficiencies. To prevent this, the OS uses synchronization
mechanisms like semaphores and mutexes, ensuring the producer waits when the buffer is full
and the consumer waits when it’s empty.

This synchronization is crucial in disk I/O, network communication, and inter-process
communication, optimizing data flow and preventing race conditions.
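
The classic bounded-buffer solution can be sketched with Python threads; the buffer size and
item count are arbitrary. Two semaphores count the free and filled slots, and a mutex protects
the buffer itself:

    import threading

    BUFFER_SIZE = 4
    buffer = []
    mutex = threading.Lock()                   # protects the shared buffer
    empty = threading.Semaphore(BUFFER_SIZE)   # free slots: producer blocks when none remain
    full = threading.Semaphore(0)              # filled slots: consumer blocks when none exist

    def producer():
        for item in range(10):
            empty.acquire()            # wait for a free slot (buffer full -> block)
            with mutex:
                buffer.append(item)
            full.release()             # announce a new item

    def consumer():
        for _ in range(10):
            full.acquire()             # wait for an item (buffer empty -> block)
            with mutex:
                item = buffer.pop(0)
            empty.release()            # announce a free slot
            print("consumed", item)

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()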
10. Disk Scheduling Algorithms
a. Name four disk-arm scheduling algorithms.
1.​ FCFS ‘first-come-first-serve’ disk scheduling algorithm
2.​ SSTF ‘Shortest seek time first’ disk scheduling algorithm
3.​ SCAN disk scheduling algorithm
4.​ C-SCAN disk scheduling algorithm
b. Outline the basic algorithm for each and discuss their advantages and disadvantages.
●​ FCFS disk scheduling algorithm
○​ It stands for 'first-come-first-serve'. As the name suggests, the request that
comes first will be processed first and so on. The requests coming to the disk are
arranged in a proper sequence as they arrive. Since every request is processed
in this algorithm, there is no chance of 'starvation'.

●​ Advantages:
○​ Implementation is easy.
○​ No chance of starvation.

● Disadvantages:
○ Average seek time is high, since consecutive requests may lie far apart on the disk.
○ Not very efficient overall.

●​ SSTF disk scheduling algorithm-


○ It stands for 'Shortest seek time first'. As the name suggests, it selects the request
with the least 'seek time' from the current head position and executes it first. This
algorithm has a lower total seek time than the FCFS algorithm.

●​ Advantages:
○​ In this algorithm, disk response time is less.
○​ More efficient than FCFS.

● Disadvantages:
○ Choosing the nearest request adds computation before every seek.
○ Starvation can occur for requests far from the current head position.

●​ SCAN disk scheduling algorithm:


○ In this algorithm, the head scans and serves all the requests in one direction until it
reaches the end of the disk. It then reverses direction and serves the requests in its
path on the way back. Due to this elevator-like behavior, this algorithm is also known
as the "Elevator Algorithm".

● Advantages:
○ Implementation is easy.
○ No request starves, since the head eventually services every request in its path.
●​ Disadvantage:
○​ The head keeps going on to the end even if there are no requests in that
direction.

●​ C-SCAN disk scheduling algorithm:


○ It stands for "Circular SCAN". This algorithm is almost the same as the SCAN
algorithm; the difference lies in what happens at the end of the disk. The disk arm
moves toward one end of the disk and serves the requests coming into its path.
○ After reaching the end of the disk, it returns to the other end of the disk without
serving any requests on the way back, and then resumes servicing in the original
direction.

●​ Advantages:
○​ The waiting time is uniformly distributed among the requests.
○​ Response time is good.

● Disadvantages:
○ The return sweep adds head movement during which no requests are served.
○ The head travels all the way to the end of the disk even when no requests remain in
that direction. A toy comparison of head movement for all four algorithms follows
below.
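
To make the trade-offs concrete, here is a toy Python comparison of total head movement under
the four policies. The request queue, head position, and disk size are made up; SCAN and
C-SCAN here always travel to the physical end of the disk, matching the disadvantage noted
above:

    def fcfs(reqs, head):
        dist = 0
        for r in reqs:               # serve strictly in arrival order
            dist += abs(r - head)
            head = r
        return dist

    def sstf(reqs, head):
        pending, dist = list(reqs), 0
        while pending:
            r = min(pending, key=lambda x: abs(x - head))   # nearest request next
            dist += abs(r - head)
            head = r
            pending.remove(r)
        return dist

    def scan(reqs, head, max_cyl=199):
        # Sweep up to the end of the disk, then reverse for requests below the start.
        low = [r for r in reqs if r < head]
        dist = max_cyl - head
        if low:
            dist += max_cyl - min(low)
        return dist

    def c_scan(reqs, head, max_cyl=199):
        # Sweep up to the end, jump back to cylinder 0 serving nothing, sweep up again.
        low = [r for r in reqs if r < head]
        dist = max_cyl - head
        if low:
            dist += max_cyl + max(low)
        return dist

    reqs, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
    print(fcfs(reqs, head), sstf(reqs, head), scan(reqs, head), c_scan(reqs, head))
    # 640 236 331 382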
