Ponce - WorkSheet 1 - System Admin and Maintenance
Instructions: Carefully read each question and provide detailed answers. Where applicable,
show calculations, provide examples, and justify your responses. Use clear and concise
language, and ensure that your explanations demonstrate understanding of the concepts.
Each indirect block can store block addresses (pointers to actual data blocks). The number of block
addresses per indirect block is:
pointers per block = block size / size of block number = 512 bytes / 4 bytes = 128 pointers per
block
We have:
direct blocks = 16
single indirect block = 128 data blocks
double indirect block = 128 x 128 = 16,384 data blocks, since each of the 128 pointers in the
double indirect block refers to a single indirect block that itself holds 128 pointers
triple indirect block = 128 x 128 x 128 = 2,097,152 data blocks
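The arithmetic above can be sketched as a short script; the 512-byte block size and 4-byte block numbers are the values given in the problem:

```python
# Inode capacity calculation, assuming 512-byte blocks and
# 4-byte block numbers as stated above.
BLOCK_SIZE = 512
POINTER_SIZE = 4
ptrs_per_block = BLOCK_SIZE // POINTER_SIZE  # 128 pointers per indirect block

direct = 16
single = ptrs_per_block          # 128
double = ptrs_per_block ** 2     # 16,384
triple = ptrs_per_block ** 3     # 2,097,152

total_blocks = direct + single + double + triple
print(total_blocks)                   # 2113680 data blocks
print(total_blocks * BLOCK_SIZE)      # maximum file size in bytes
```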
c. How does caching within the buffer cache improve system performance?
The buffer cache stores frequently accessed disk data in RAM, reducing the need for slower
disk access and improving performance. It leverages locality of reference, ensuring frequently
used data remains readily available. The buffer cache employs caching algorithms that manage
data retention based on access patterns, for example by evicting the least recently used blocks
first. For write operations, it temporarily holds modified (dirty) data, enabling batched writes
that minimize disk I/O overhead and further improve performance.
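A minimal sketch of one such retention policy, least-recently-used (LRU) eviction, using an `OrderedDict`; the block numbers and capacity are illustrative, not drawn from any real kernel:

```python
# Toy buffer cache with LRU eviction -- one common retention strategy.
from collections import OrderedDict

class BufferCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block number -> data, oldest first

    def read(self, block_no, read_from_disk):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)   # mark as recently used
            return self.blocks[block_no]        # cache hit: no disk I/O
        data = read_from_disk(block_no)         # cache miss: slow path
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

cache = BufferCache(capacity=2)
cache.read(1, lambda n: f"block-{n}")
cache.read(2, lambda n: f"block-{n}")
cache.read(1, lambda n: f"block-{n}")  # hit: block 1 becomes most recent
cache.read(3, lambda n: f"block-{n}")  # miss: evicts block 2
print(list(cache.blocks))              # [1, 3]
```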
3. Write-through Caching
a. Why might filesystems managing external storage devices use write-through caching (i.e.,
avoid buffering writes), even though it negatively affects performance?
Write-through caching writes every update to the storage device immediately, keeping the
cache and the device consistent and eliminating the risk of losing buffered data after a crash
or sudden device removal. This matters especially for external devices, which users may unplug
without warning. While it sacrifices write performance compared to write-back strategies, it
guarantees that updates are persisted as soon as they complete, making it suitable where
maintaining data integrity is paramount.
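The trade-off can be sketched with two toy cache classes; the dict standing in for the "disk" and all names here are illustrative:

```python
# Write-through vs. write-back semantics, sketched with a dict as the "disk".
class WriteThroughCache:
    def __init__(self, disk):
        self.disk = disk
        self.cache = {}

    def write(self, block_no, data):
        self.cache[block_no] = data
        self.disk[block_no] = data  # persisted immediately: slower per write,
                                    # but a crash after this loses nothing

class WriteBackCache:
    def __init__(self, disk):
        self.disk = disk
        self.cache = {}
        self.dirty = set()

    def write(self, block_no, data):
        self.cache[block_no] = data
        self.dirty.add(block_no)    # fast, but the update is lost if the
                                    # device is removed before flush() runs

    def flush(self):
        for block_no in self.dirty:
            self.disk[block_no] = self.cache[block_no]
        self.dirty.clear()

disk = {}
wt = WriteThroughCache(disk)
wt.write(7, b"data")
print(7 in disk)  # True: already on "disk", no flush needed
```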
b. What is the equivalent mechanism in Windows, and how does it compare to flushd?
The Windows Cache Manager, through its lazy writer, performs a role similar to UNIX's
flushd: it tracks dirty (modified) pages in memory and regularly writes them to disk to
maintain data integrity and system performance.
Unlike flushd, however, Windows uses a more dynamic approach: flushing is triggered by a
variety of system conditions, including memory pressure, disk activity, and application
behavior, rather than fixed time intervals.
b. Which two of these algorithms are most commonly used in practice? Justify your answer.
The First Fit and Best Fit algorithms are the most commonly used in practice due to
their balance of speed and memory utilization. First Fit is widely preferred because it is fast
and requires minimal computation, as it allocates the first available hole large enough for the
process. This makes it ideal for systems requiring quick memory allocation with low overhead,
though it may lead to fragmentation near the start of memory over time. On the other hand,
Best Fit is commonly used when memory efficiency is a priority, as it reduces wasted space by
selecting the smallest hole that can accommodate the process, leaving the smallest possible
leftover fragment. Although it is slower than First Fit because it must search the entire free
list, it tends to pack memory more tightly, making it suitable for systems with limited memory
resources. In contrast, Next Fit offers no significant advantage over First Fit and can scatter
allocations across memory, while Worst Fit deliberately leaves the largest leftover hole in
each allocation and quickly exhausts the large free blocks, worsening external fragmentation.
Therefore, First Fit is preferred for its speed, while Best Fit is chosen for its tighter
memory usage.
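The two strategies can be sketched over a list of free holes; the hole sizes (in KB) and the request are invented for illustration:

```python
# First Fit vs. Best Fit over a list of free hole sizes (KB).
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole large enough, or None."""
    best = None
    for i, size in enumerate(holes):
        if size >= request and (best is None or size < holes[best]):
            best = i
    return best

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 -> the 500 KB hole, found with one quick scan
print(best_fit(holes, 212))   # 3 -> the 300 KB hole, smallest leftover space
```

Note that First Fit stops at the first match while Best Fit always examines every hole, which is exactly the speed-versus-utilization trade-off described above.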
The problem arises when the producer writes to a full buffer or the consumer reads from an
empty one, leading to overwritten data or invalid reads. To prevent this, the OS uses
synchronization mechanisms such as semaphores and mutexes, making the producer wait while the
buffer is full and the consumer wait while it is empty.
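The bounded-buffer scheme described above can be sketched with Python's threading primitives; the buffer capacity and item count are arbitrary choices for the example:

```python
# Semaphore-based bounded buffer: producer waits when full, consumer when empty.
import threading
from collections import deque

CAPACITY = 4
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()                # block while the buffer is full
        with mutex:
            buffer.append(item)
        full.release()                 # signal one more filled slot

consumed = []
def consumer(n):
    for _ in range(n):
        full.acquire()                 # block while the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # signal one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, ..., 9] -- nothing lost, nothing duplicated
```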
FCFS (First-Come, First-Served)
● Advantages:
○ Implementation is easy.
○ No chance of starvation.
● Disadvantages:
○ Total seek time tends to be high.
○ Not very efficient.
SSTF (Shortest Seek Time First)
● Advantages:
○ Average disk response time is lower.
○ More efficient than FCFS.
● Disadvantages:
○ Choosing the nearest request at each step adds computation, slowing execution.
○ Starvation of requests far from the head can occur.
SCAN (Elevator Algorithm)
● Advantages:
○ Implementation is easy.
○ Requests along the direction of head movement do not wait long in the queue.
● Disadvantage:
○ The head keeps going to the end of the disk even if there are no requests in that
direction.
C-SCAN (Circular SCAN)
● Advantages:
○ The waiting time is uniformly distributed among the requests.
○ Response time is good.
● Disadvantages:
○ The long return sweep increases total head movement compared to SCAN.
○ The head keeps going to the end of the disk even when no requests remain in that
direction.
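The efficiency difference between these policies can be illustrated by totaling head movement for one request queue; the queue and starting cylinder below are invented for the example:

```python
# Total head movement (in cylinders) under FCFS vs. SSTF.
def fcfs_seek(requests, head):
    """Service requests in arrival order; return total head movement."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_seek(requests, head):
    """Always service the request nearest the head; return total movement."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_seek(queue, 53))  # 640 cylinders
print(sstf_seek(queue, 53))  # 236 cylinders -- far less seeking
```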