Practice Exercises in OS
algorithms for each queue, the criteria used to move processes between queues,
and so on. These algorithms are thus really sets of algorithms (for example, the
set of RR algorithms for all time slices, and so on). One set of algorithms may
include another (for example, the FCFS algorithm is the RR algorithm with an
infinite time quantum). What (if any) relation holds between the following pairs of
algorithm sets?
a. Priority and SJF
b. Multilevel feedback queues and FCFS
c. Priority and FCFS
d. RR and SJF
- a. The shortest job has the highest priority.
- b. The lowest level of MLFQ is FCFS.
- c. FCFS gives the highest priority to the job having been in existence the
longest.
- d. None
6. Suppose that a scheduling algorithm (at the level of short-term CPU scheduling)
favors those processes that have used the least processor time in the recent
past. Why will this algorithm favor I/O-bound programs and yet not permanently
starve CPU-bound programs?
- It will favor the I/O-bound programs because of the relatively short CPU burst
request by them; however, the CPU-bound programs will not starve because
the I/O-bound programs will relinquish the CPU relatively often to do their I/O.
7. Distinguish between PCS and SCS scheduling.
- PCS scheduling is done local to the process. It is how the thread library
schedules threads onto available LWPs. SCS scheduling is the situation
where the operating system schedules kernel threads. On systems using
either many-to-one or many-to-many, the two scheduling models are
fundamentally different. On systems using one-to-one, PCS and SCS are the
same.
8. Assume that an operating system maps user-level threads to the kernel using the
many-to-many model and that the mapping is done through the use of LWPs.
Furthermore, the system allows program developers to create real-time threads.
Is it necessary to bind a real-time thread to an LWP?
- Yes, otherwise a user thread may have to compete for an available LWP prior
to being actually scheduled. By binding the user thread to an LWP, there is no
latency while waiting for an available LWP; the real-time user thread can be
scheduled immediately.
9. The traditional UNIX scheduler enforces an inverse relationship between priority
numbers and priorities: the higher the number, the lower the priority. The
scheduler recalculates process priorities once per second using the following
function: Priority = (recent CPU usage / 2) + base where base = 60 and recent
CPU usage refers to a value indicating how often a process has used the CPU
since priorities were last recalculated. Assume that recent CPU usage for
process P1 is 40, for process P2 is 18, and for process P3 is 10. What will be the
new priorities for these three processes when priorities are recalculated? Based
on this information, does the traditional UNIX scheduler raise or lower the relative
priority of a CPU-bound process?
- The priorities assigned to the processes are 80, 69, and 65 respectively. The
scheduler lowers the relative priority of CPU-bound processes.
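The recalculation above can be checked with a short sketch (the process names and usage values are taken from the exercise):

```python
# Traditional UNIX priority recalculation: larger number = lower priority.
def new_priority(recent_cpu_usage, base=60):
    """Priority = (recent CPU usage / 2) + base."""
    return recent_cpu_usage // 2 + base

usage = {"P1": 40, "P2": 18, "P3": 10}
priorities = {p: new_priority(u) for p, u in usage.items()}
print(priorities)  # {'P1': 80, 'P2': 69, 'P3': 65}
```

A CPU-bound process accumulates more recent CPU usage, so its priority number grows and its relative priority drops.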
10. Why is it important for the scheduler to distinguish I/O-bound programs?
- I/O-bound programs have the property of performing only a small amount of
computation before performing I/O. Such programs typically do not use up
their entire CPU quantum. CPU-bound programs, on the other hand, use their
entire quantum without performing any blocking I/O operations. Consequently,
one could make better use of the computer's resources by giving higher
priority to I/O-bound programs, allowing them to execute ahead of the
CPU-bound programs.
11. Discuss how the following pairs of scheduling criteria conflict in certain settings.
A. CPU utilization and response time
B. Average turnaround time and maximum waiting time
C. I/O device utilization and CPU utilization
- A. CPU utilization and response time: CPU utilization is increased if the
overhead associated with context switching is minimized. The context-
switching overhead could be lowered by performing context switches
infrequently. This could, however, increase the response time for
processes.
- B. Average turnaround time and maximum waiting time: Average turnaround
time is minimized by executing the shortest tasks first. Such a scheduling
policy could, however, starve long-running tasks and thereby increase their
waiting time.
- C. I/O device utilization and CPU utilization: CPU utilization is maximized by
running long-running CPU-bound tasks without performing context switches.
I/O device utilization is maximized by scheduling I/O-bound jobs as soon as
they become ready to run, thereby incurring the overheads of context
switches.
12. One technique for implementing lottery scheduling works by assigning processes
lottery tickets, which are used for allocating CPU time. Whenever a scheduling
decision has to be made, a lottery ticket is chosen at random, and the process
holding that ticket gets the CPU. The BTV operating system implements lottery
scheduling by holding a lottery 50 times each second, with each lottery
winner getting 20 milliseconds of CPU time (20 milliseconds x 50 = 1
second). Describe how the BTV scheduler can ensure that higher-priority
threads receive more attention from the CPU than lower-priority threads.
- By assigning more lottery tickets to higher-priority processes.
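A minimal sketch of this idea (the ticket split across three priority levels is an assumption for illustration, not part of the exercise):

```python
import random

# Hypothetical ticket allocation: higher-priority threads hold more tickets,
# so they win proportionally more of the 20 ms lotteries.
tickets = {"high": 75, "medium": 20, "low": 5}

def hold_lottery(tickets, rng=random):
    """Draw one ticket uniformly at random and return the thread holding it."""
    draw = rng.randrange(sum(tickets.values()))
    for thread, count in tickets.items():
        if draw < count:
            return thread
        draw -= count

# BTV holds 50 lotteries per second; simulate many seconds' worth of draws.
wins = {t: 0 for t in tickets}
for _ in range(50_000):
    wins[hold_lottery(tickets)] += 1
# Expected win shares approach 75% / 20% / 5%.
```

Over time each thread's share of the CPU converges to its share of the tickets, which is exactly how the scheduler biases toward higher-priority threads.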
13. Most scheduling algorithms maintain a run queue, which lists processes eligible to
run on a processor. On multicore systems, there are two general options: (1)
each processing core has its own run queue, or (2) a single run queue is shared
by all processing cores. What are the advantages and disadvantages of each of
these approaches?
- The primary advantage of each processing core having its own run queue is
that there is no contention over a single run queue when the scheduler is
running concurrently on two or more processors. When a scheduling decision
must be made for a processing core, the scheduler need look no further than
its private run queue.
- A disadvantage of a single run queue is that it must be protected with locks to
prevent a race condition: a processing core may be available to run a thread,
yet it must first acquire the lock to retrieve the thread from the single queue.
However, load balancing would likely not be an issue with a single run queue,
whereas when each processing core has its own run queue, there must be
some sort of load balancing between the different run queues.
14. Consider the exponential average formula used to predict the length of the next
CPU burst. What are the implications of assigning the following values to the
parameters used by the algorithm?
a. α = 0 and τ0 = 100 milliseconds
b. α = 0.99 and τ0 = 10 milliseconds
- a. When α = 0 and τ0 = 100 milliseconds, the formula always makes a
prediction of 100 milliseconds for the next CPU burst.
- b. When α = 0.99 and τ0 = 10 milliseconds, the most recent behavior of the
process is given much higher weight than the past history associated with the
process. Consequently, the scheduling algorithm is almost memoryless, and
simply predicts the length of the previous burst for the next quantum of CPU
execution.
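Both cases follow directly from the recurrence tau = alpha * t + (1 - alpha) * tau; a small sketch (the burst values are chosen arbitrarily for illustration):

```python
def predict_next(alpha, tau0, bursts):
    """Exponential average: tau = alpha * t + (1 - alpha) * tau
    applied once per observed burst t."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# a. alpha = 0: observed bursts are ignored; the prediction never moves off tau0.
print(predict_next(0, 100, [10, 200, 35]))    # 100.0
# b. alpha = 0.99: the prediction is dominated by the most recent burst (35).
print(predict_next(0.99, 10, [10, 200, 35]))  # ~36.6
```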
15. A variation of the round-robin scheduler is the regressive round-robin scheduler.
This scheduler assigns each process a time quantum and a priority. The initial
value of a time quantum is 50 milliseconds. However, every time a process has
been allocated the CPU and uses its entire time quantum (does not block for
I/O), 10 milliseconds is added to its time quantum, and its priority level is
boosted. (The time quantum for a process can be increased to a maximum of
100 milliseconds.) When a process blocks before using its entire time quantum,
its time quantum is reduced by 5 milliseconds, but its priority remains the same.
What type of process (CPU-bound or I/O bound) does the regressive round-robin
scheduler favour? Explain.
- This scheduler would favor CPU-bound processes, as they are rewarded with
a longer time quantum as well as a priority boost whenever they consume an
entire time quantum. It does not heavily penalize I/O-bound processes:
although they are likely to block for I/O before consuming their entire time
quantum and so lose a little quantum, their priority remains the same.
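The quantum-adjustment rule can be sketched as follows (the priority bookkeeping is omitted):

```python
def adjust_quantum(quantum, used_full_quantum):
    """Regressive round-robin rule: +10 ms (capped at 100) when the full
    quantum is used; -5 ms when the process blocks before it expires."""
    if used_full_quantum:
        return min(quantum + 10, 100)
    return quantum - 5

# A CPU-bound process repeatedly consumes its whole quantum:
q = 50
for _ in range(7):
    q = adjust_quantum(q, True)
print(q)  # 100 (capped)

# An I/O-bound process blocks early three times in a row:
q = 50
for _ in range(3):
    q = adjust_quantum(q, False)
print(q)  # 35
```

The CPU-bound process converges on the maximum 100 ms quantum plus priority boosts, which is why the scheduler favours it.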
16. Consider the following set of processes, with the length of the CPU burst given in
milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all
at time 0.
a. Draw four Gantt charts that illustrate the execution of these processes using
the following scheduling algorithms: FCFS, SJF, nonpreemptive priority (a
smaller priority number implies a higher priority), and RR (quantum = 1).
b. What is the turnaround time of each process for each of the scheduling
algorithms in part a?
c. What is the waiting time of each process for each of these scheduling
algorithms?
d. Which of the algorithms results in the minimum average waiting time (over all
processes)?
- a. The four Gantt charts are:
FCFS: P1 (0-10), P2 (10-11), P3 (11-13), P4 (13-14), P5 (14-19)
RR (q = 1): P1, P2, P3, P4, P5, P1, P3, P5, P1, P5, P1, P5, P1, P5, then P1
alone from 14 to 19
SJF: P2 (0-1), P4 (1-2), P3 (2-4), P5 (4-9), P1 (9-19)
Nonpreemptive priority: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19)
- b. Turnaround time
Process FCFS RR SJF Priority
P1 10 19 19 16
P2 11 2 1 1
P3 13 7 4 18
P4 14 4 2 19
P5 19 14 9 6
- c. Waiting time (turnaround time minus burst time)
Process FCFS RR SJF Priority
P1 0 9 9 6
P2 10 1 0 0
P3 11 5 2 16
P4 13 3 1 18
P5 14 9 4 1
- d. Shortest Job First
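The three non-preemptive cases can be verified with a short sketch, using the burst times and priorities consistent with the answer tables above (bursts 10, 1, 2, 1, 5; priorities 3, 1, 3, 4, 2):

```python
def waiting_times(order, burst):
    """For non-preemptive back-to-back execution, waiting time = start time."""
    waits, clock = {}, 0
    for p in order:
        waits[p] = clock
        clock += burst[p]
    return waits

burst = {"P1": 10, "P2": 1, "P3": 2, "P4": 1, "P5": 5}
prio  = {"P1": 3, "P2": 1, "P3": 3, "P4": 4, "P5": 2}

fcfs = waiting_times(["P1", "P2", "P3", "P4", "P5"], burst)
sjf  = waiting_times(sorted(burst, key=lambda p: (burst[p], p)), burst)
pri  = waiting_times(sorted(burst, key=lambda p: (prio[p], p)), burst)

for name, w in [("FCFS", fcfs), ("SJF", sjf), ("Priority", pri)]:
    print(name, w, "avg =", sum(w.values()) / len(w))
# FCFS avg = 9.6, SJF avg = 3.2, Priority avg = 8.2 -- SJF is the minimum
```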
17. The following processes are being scheduled using a preemptive, round robin
scheduling algorithm. Each process is assigned a numerical priority, with a
higher number indicating a higher relative priority. In addition to the processes
listed below, the system also has an idle task (which consumes no CPU
resources and is identified as Pidle). This task has priority 0 and is scheduled
whenever the system has no other available processes to run. The length of a
time quantum is 10 units. If a process is pre-empted by a higher priority process,
the pre-empted process is placed at the end of the queue.
- A. Gantt chart (not reproduced in this copy)
- B. P1: 20-0 = 20, P2: 80-25 = 55, P3: 90-30 = 60, P4: 75-60 = 15, P5:120-100
= 20, P6: 115-105 = 10
- C. P1: 0, P2: 40, P3: 35, P4: 0, P5: 10, P6: 0
- D. 105/120 = 87.5 percent
18. The nice command is used to set the nice value of a process on Linux, as well as
on other UNIX systems. Explain why some systems allow any user to assign a
process a nice value >= 0 yet allow only the root user to assign nice values < 0.
- Nice values < 0 are assigned a higher relative priority and such systems may
not allow non-root processes to assign themselves higher priorities.
19. Which of the following scheduling algorithms could result in starvation?
A. First- come, first-served
B. Shortest job first
C. Round robin
D. Priority
- Shortest-job-first and priority-based scheduling algorithms could result in
starvation.
20. Consider a variant of the RR scheduling algorithm in which the entries in the
ready queue are pointers to the PCBs.
a. What would be the effect of putting two pointers to the same process in the
ready queue?
b. What would be two major advantages and two disadvantages of this scheme?
c. How would you modify the basic RR algorithm to achieve the same effect
without the duplicate pointers?
- a. In effect, that process will have increased its priority since by getting time
more often it is receiving preferential treatment.
- b. The advantage is that more important jobs could be given more time, in
other words, higher priority in treatment. The consequence, of course, is that
shorter jobs will suffer.
- c. Allot a longer amount of time to processes deserving higher priority; in
other words, allow two or more different quantum lengths in the round-robin
scheme.
21. Consider a system running ten I/O-bound tasks and one CPU-bound task.
Assume that the I/O-bound tasks issue an I/O operation once for every
millisecond of CPU computing and that each I/O operation takes 10 milliseconds
to complete. Also assume that the context-switching overhead is 0.1 millisecond
and that all processes are long-running tasks. What is the CPU utilization for
a round-robin scheduler when:
a. The time quantum is 1 millisecond
b. The time quantum is 10 milliseconds
- a. The time quantum is 1 millisecond: Irrespective of which process is
scheduled, the scheduler incurs a 0.1-millisecond context-switching cost for
every context switch. This results in a CPU utilization of 1/1.1 * 100 = 91%.
- b. The time quantum is 10 milliseconds: The I/O-bound tasks incur a context
switch after using up only 1 millisecond of the time quantum. The time
required to cycle through all the processes is therefore 10 * 1.1 + 10.1 = 21.1
milliseconds (each I/O-bound task executes for 1 millisecond and then incurs
a context switch, whereas the CPU-bound task executes for 10 milliseconds
before incurring a context switch). The CPU utilization is therefore
20/21.1 * 100 = 95%.
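The arithmetic for both cases, written out:

```python
switch = 0.1  # context-switch overhead in milliseconds

# a. Quantum = 1 ms: every 1 ms of useful work is followed by a 0.1 ms switch.
util_a = 1 / (1 + switch)
print(f"{util_a:.0%}")   # 91%

# b. Quantum = 10 ms: one full cycle = ten I/O-bound tasks at (1 + 0.1) ms each
#    plus the CPU-bound task at (10 + 0.1) ms = 21.1 ms, of which 20 ms is useful.
util_b = 20 / (10 * (1 + switch) + (10 + switch))
print(f"{util_b:.1%}")   # 94.8%
```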
22. Consider a system implementing multilevel queue scheduling. What strategy can
a computer user employ to maximize the amount of CPU time allocated to the
user’s process?
- The program could maximize the CPU time allocated to it by not fully utilizing
its time quantum. It could use a large fraction of its assigned quantum but
relinquish the CPU before the end of the quantum, thereby increasing the
priority associated with the process.
23. Consider a preemptive priority scheduling algorithm based on dynamically
changing priorities. Larger priority numbers imply higher priority. When a process
is waiting for the CPU (in the ready queue, but not running), its priority changes
at a rate α; when it is running, its priority changes at a rate β. All processes are
given a priority of 0 when they enter the ready queue. The parameters α and β
can be set to give many different scheduling algorithms.
A. What is the algorithm that results from β > α > 0?
B. What is the algorithm that results from α < β < 0?
- A. FCFS
- B. LIFO
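Both results can be checked with a tick-by-tick simulation of the rule (the burst lengths and arrival times are arbitrary choices for illustration):

```python
def completion_order(alpha, beta, arrivals, burst=3, horizon=100):
    """Simulate the dynamic-priority scheme: every process enters at priority 0;
    the running process's priority changes by beta per tick, waiting processes'
    by alpha per tick; ties go to the earlier arrival."""
    remaining = {p: burst for p in arrivals}
    priority = {}
    finished = []
    for t in range(horizon):
        for p, a in arrivals.items():
            if a == t:
                priority[p] = 0.0
        ready = [p for p in priority if remaining[p] > 0]
        if not ready:
            continue
        run = max(ready, key=lambda p: (priority[p], -arrivals[p]))
        for p in ready:
            priority[p] += beta if p == run else alpha
        remaining[run] -= 1
        if remaining[run] == 0:
            finished.append(run)
    return finished

arrivals = {"P1": 0, "P2": 1, "P3": 2}
print(completion_order(alpha=1, beta=2, arrivals=arrivals))    # ['P1', 'P2', 'P3'] -- FCFS
print(completion_order(alpha=-2, beta=-1, arrivals=arrivals))  # ['P3', 'P2', 'P1'] -- LIFO
```

With beta > alpha > 0 the running process always stays ahead of every waiter, so nothing is preempted and jobs finish in arrival order; with alpha < beta < 0 each newcomer (at priority 0) outranks everything already in the system, giving last-in, first-out behaviour.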
24. Explain the differences in how much the following scheduling algorithms
discriminate in favour of short processes:
A. FCFS
B. RR
C. Multilevel feedback queues
- A. First-come, first-served (FCFS) scheduling.
In this algorithm, the process that requests the CPU first is allotted the
CPU first, and the implementation is easily maintained with a FIFO
queue.
When the CPU is free, it is allocated to the process at the head of the
queue, which holds the CPU until it terminates or requests an I/O
device, so every other process waiting for the CPU must wait its turn.
Waiting time is therefore large when a long process executes before a
shorter one.
Thus FCFS discriminates against short jobs, since any short job arriving
after a long job will have a long waiting time.
- B. Round-robin (RR) scheduling.
This algorithm is designed for time-sharing systems.
A small unit of time called the time quantum (or time slice) is defined,
and each process is switched in and out of the CPU based on this
quantum, which is generally from 10 ms to 100 ms in length.
A process is never allotted the CPU for more than one time quantum in
a row.
The performance of RR depends heavily on the size of the time
quantum: if the quantum is very long, RR behaves the same as FCFS,
and if it is very small (say 1 ms), RR becomes processor sharing,
creating the appearance that each of n processes has its own
processor running at 1/n the speed of the real processor.
RR treats all jobs equally (giving them equal shares of CPU time), so
short jobs are able to leave the system faster since they finish first.
- C. Multilevel feedback queues.
This scheme allows a process to move between queues; the idea is to
separate processes according to the characteristics of their CPU
bursts.
If a process uses too much CPU time, it is moved to a lower-priority
queue; as a result, I/O-bound and interactive processes end up in the
higher-priority queues.
In addition, a process that waits too long in a lower-priority queue may
be moved to a higher-priority queue, and this form of aging prevents
starvation.
Multilevel feedback queues thus behave much like RR, so they
discriminate favorably toward short jobs.
25. Using the Windows scheduling algorithm, determine the numeric priority of each
of the following threads.
A. A thread in the REALTIME PRIORITY CLASS with a relative priority of
NORMAL
B. A thread in the ABOVE NORMAL PRIORITY CLASS with a relative priority of
HIGHEST
C. A thread in the BELOW NORMAL PRIORITY CLASS with a relative priority of
ABOVE NORMAL
- A. 24
- B. 12
- C. 7
26. Assuming that no threads belong to the REALTIME PRIORITY CLASS and that
none may be assigned a TIME CRITICAL priority, what combination of priority
class and priority corresponds to the highest possible relative priority in Windows
scheduling?
- HIGH priority class and HIGHEST priority within that class (numeric priority
of 15).
27. Consider the scheduling algorithm in the Solaris operating system for time-
sharing threads.
A. What is the time quantum (in milliseconds) for a thread with priority 10? With
priority 55?
B. Assume a thread with priority 35 has used its entire time quantum without
blocking. What new priority will the scheduler assign this thread?
C. Assume a thread with priority 35 blocks for I/O before its time quantum has
expired. What new priority will the scheduler assign this thread?
- A. 160 and 40
- B. 35
- C. 54
28. Assume that two tasks A and B are running on a Linux system. The nice values
of A and B are -5 and +5, respectively. Using the CFS scheduler as a guide,
describe how the respective values of vruntime vary between the two processes
given each of the following scenarios:
a. Both A and B are CPU-bound.
b. A is I/O-bound, and B is CPU-bound.
c. A is CPU-bound, and B is I/O-bound.
- a. Since A has a higher priority than B, vruntime will move more slowly for A
than for B. If both A and B are CPU-bound (that is, they both use the CPU for
as long as it is allocated to them), vruntime will generally be smaller for A
than for B, and hence A will have a greater priority to run over B.
- b. In this situation, vruntime will be much smaller for A than for B, as (1)
vruntime will move more slowly for A than for B due to the priority difference,
and (2) A will require less CPU time since it is I/O-bound.
- c. This situation is not as clear-cut: it is possible that B ends up running
ahead of A, since B uses the processor less than A, and B's value of
vruntime may in fact be lower than A's.
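The mechanics behind all three scenarios come down to CFS's vruntime scaling; a sketch, assuming the weight values from the Linux kernel's nice-to-weight table (nice -5 -> 3121, nice 0 -> 1024, nice +5 -> 335):

```python
NICE0_WEIGHT = 1024             # weight of a nice-0 task
weight = {"A": 3121, "B": 335}  # A: nice -5, B: nice +5

def vruntime_delta(runtime_ms, w):
    """CFS charges runtime * (NICE0_WEIGHT / weight) to a task's vruntime,
    so a higher-weight (higher-priority) task's vruntime advances more slowly."""
    return runtime_ms * NICE0_WEIGHT / w

# For the same 10 ms of actual CPU time:
print(vruntime_delta(10, weight["A"]))  # ~3.3  -- A's vruntime creeps forward
print(vruntime_delta(10, weight["B"]))  # ~30.6 -- B's vruntime jumps ahead
```

Since CFS always picks the runnable task with the smallest vruntime, A is preferred whenever both are runnable; only when A consumes far more CPU time than B (scenario c) can B's total vruntime fall below A's.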
29. Discuss ways in which the priority inversion problem could be addressed in a
real- time system. Also discuss whether the solutions could be implemented
within the context of a proportional share scheduler.
- The priority inversion problem could be addressed by temporarily changing
the priorities of the processes involved. Processes that are accessing
resources needed by a higher-priority process inherit the higher priority until
they are finished with the resources in question. When they are finished, their
priority reverts to its original value. This solution can be easily implemented
within a proportional share scheduler; the shares of the high-priority
processes are simply transferred to the lower-priority process for the duration
when it is accessing the resources.
30. Under what circumstances is rate-monotonic scheduling inferior to earliest-
deadline-first scheduling in meeting the deadlines associated with processes?
- Consider two processes P1 and P2 where p1 = 50, t1 = 25 and p2 = 75, t2 =
30. If P1 were assigned a higher priority than P2, then the following
scheduling events happen under rate-monotonic scheduling. P1 is scheduled
at t = 0, P2 is scheduled at t = 25, P1 is scheduled at t = 50, and P2 is
scheduled at t = 75. P2 is not scheduled early enough to meet its deadline.
The earliest deadline schedule performs the following scheduling events: P1
is scheduled at t = 0, P2 is scheduled at t = 25, P1 is scheduled at t = 55, and
so on. This schedule actually meets the deadlines and therefore earliest-
deadline-first scheduling is more effective than the rate-monotonic scheduler.
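The deadline miss described above can be confirmed with a millisecond-tick simulation (deadlines are assumed equal to periods, as is standard in rate-monotonic analysis):

```python
def rate_monotonic_meets_deadlines(tasks, horizon):
    """tasks = [(period, burst)]; shorter period = higher priority.
    Returns False if any job is unfinished when its next period begins."""
    remaining = [0] * len(tasks)
    for t in range(horizon):
        for i, (period, burst) in enumerate(tasks):
            if t % period == 0:
                if remaining[i] > 0:   # previous job still unfinished: miss
                    return False
                remaining[i] = burst
        ready = [i for i, r in enumerate(remaining) if r > 0]
        if ready:
            # run the highest-priority (shortest-period) ready task for 1 ms
            remaining[min(ready, key=lambda i: tasks[i][0])] -= 1
    return all(r == 0 for r in remaining)

# P1: p1 = 50, t1 = 25; P2: p2 = 75, t2 = 30 (the example above; hyperperiod 150)
print(rate_monotonic_meets_deadlines([(50, 25), (75, 30)], horizon=150))  # False
```

EDF succeeds on the same task set because when P1's second job arrives at t = 50, its deadline (100) is later than P2's (75), so P2 is allowed to finish first.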
31. Consider two processes, P1 and P2, where p1 = 50, t1 = 25, p2 = 75, and t2 = 30.
a. Can these two processes be scheduled using rate-monotonic scheduling?
Illustrate your answer using a Gantt chart such as the ones in Figure 6.16 –
Figure 6.19.
b. Illustrate the scheduling of these two processes using earliest deadline-first
(EDF) scheduling.
- a. No. Consider when P1 is assigned a higher priority than P2 under the
rate-monotonic scheduler: P1 is scheduled at t = 0, P2 is scheduled at t = 25,
P1 is scheduled at t = 50, and P2 is scheduled at t = 75. P2 is not scheduled
early enough to meet its deadline. When P1 is assigned a lower priority than
P2, P1 does not meet its deadline, since it will not be scheduled in time.
- b. Since P1 has the earliest deadline, it will be scheduled first. After P1
completes its execution, P2 will be scheduled.
Time Process
0-25 P1
25-55 P2
55-80 P1
80-110 P2
110-135 P1
and so on…
32. Explain why interrupt and dispatch latency times must be bounded in a hard
real-time system.
- Interrupt latency covers the following tasks: save the currently executing
instruction, determine the type of interrupt, save the current process state,
and then invoke the appropriate interrupt service routine. Dispatch latency is
the cost associated with stopping one process and starting another. Both
interrupt and dispatch latency need to be minimized in order to ensure that
real-time tasks receive immediate attention. Furthermore, sometimes
interrupts are disabled when kernel data structures are being modified, so
the interrupt does not get serviced immediately. For hard real-time systems,
the time period for which interrupts are disabled must be bounded in order to
guarantee the desired quality of service.