Operating System Mid Term Exam Revision Note
Operating Systems
Mid-Term Examination Revision Note
Table of Contents
Lecture 01 and Tutorial 01: Introduction to Operating Systems and Computer System Structures
Lecture 02 and Tutorial 02: Operating System Structures
Lecture 03 and Tutorial 03: Processes and Threads
Lecture 04 and Tutorial 04: CPU Scheduling
Lecture 05 and Tutorial 05: Process Synchronization
Lecture 06 and Tutorial 06: Deadlocks
Past Examination Paper Questions
TSN2101 Operating Systems Revision Note
4. Computer Startup
Bootstrap program is loaded at power-up or reboot
Typically stored in ROM or EPROM, generally known as firmware
Initializes all aspects of system
Loads operating system kernel and starts execution
Interrupt transfers control to the interrupt service routine generally, through the
interrupt vector, which contains the addresses of all the service routines.
Interrupt architecture must save the address of the interrupted instruction
Incoming interrupts are disabled while another interrupt is being processed to prevent a
lost interrupt
A trap is a software-generated interrupt caused either by an error or a user request
An operating system is interrupt driven
7. Interrupt Handling
The operating system preserves the state of the CPU by storing registers and the
program counter
Separate segments of code determine what action should be taken for each type of
interrupt
These codes are accessed through interrupt vector – the interrupt vector contains the
addresses of where these codes are located in memory
8. Interrupt Timeline
9. I/O Structure
After I/O starts, control returns to the user program only upon I/O completion (synchronous I/O)
14. Caching
Important principle, performed at many levels in a computer (in hardware, operating
system, software)
Information in use copied from slower to faster storage temporarily
Faster storage (cache) checked first to determine if information is there
If it is, information used directly from the cache (fast)
If not, data copied to cache and used there
Cache smaller than storage being cached
Cache management important design problem
Cache size and replacement policy
The purpose of booting is to load the kernel from secondary storage into primary memory and to initialize the registers, memory, and I/O devices and their controllers. After booting, the O/S is ready to accept commands and to provide services to application programs and users. Booting starts when you switch on the power or reset the computer.
3. What are the resources of a computer? Why is the O/S called a resource allocator?
Resources of a computer are CPU cycles, memory space, file-storage space, I/O devices,
and so on. The operating system acts as the manager of these resources. Facing numerous and
possibly conflicting requests for resources, the operating system must decide how to allocate
them to specific programs and users so that it can operate the computer system efficiently and
fairly.
4. List the four steps that are necessary to run a program on a completely dedicated machine,
a computer that is running only that program.
Reserve machine time; manually load the program into memory; load the starting address and
begin execution; monitor and control the execution of the program from the console.
5. How does the distinction between kernel mode and user mode function as a rudimentary
form of protection (security) system?
The distinction between kernel mode and user mode provides a rudimentary form of
protection in the following manner. Certain instructions could be executed only when the
CPU is in kernel mode.
Similarly, hardware devices could be accessed only when the program is executing in kernel
mode. Control over when interrupts could be enabled or disabled is also possible only when
the CPU is in kernel mode. Consequently, the CPU has very limited capability when
executing in user mode, thereby enforcing protection of critical resources.
6. What is the purpose of interrupts? What are the differences between a trap and an interrupt?
Can traps be generated intentionally by a user program? If so, for what purpose?
An interrupt is a hardware-generated change of flow within the system; an interrupt handler is
summoned to deal with its cause, after which control returns to the interrupted context. A trap
is a software-generated interrupt, caused either by an error or by a user request. A user
program can generate a trap intentionally, by issuing a system call, in order to request an
operating-system service.
7. Which of the following operations need to be privileged: set value of timer, read the clock,
clear memory, issue a trap instruction, turn off interrupts, modify entries in the device-status
table, switch from user to kernel mode, access an I/O device?
The following operations need to be privileged: set value of timer, clear memory, turn off
interrupts, modify entries in device-status table, access I/O device. The rest can be performed
in user mode.
8. Describe the differences between symmetric and asymmetric multiprocessing. What are three
advantages and one disadvantage of multiprocessor systems?
Symmetric multiprocessing treats all processors as equals, and I/O can be processed on any
CPU. Asymmetric multiprocessing has one master CPU and the remainder CPUs are slaves.
The master distributes tasks among the slaves, and I/O is usually done by the master only.
Multiprocessors can save money by not duplicating power supplies, housings, and
peripherals. They can execute programs more quickly and can have increased reliability.
They are also more complex in both hardware and software than uniprocessor systems.
9. Direct memory access is used for high-speed I/O devices in order to avoid increasing the
CPU’s execution load.
How does the CPU interface with the device to coordinate the transfer?
How does the CPU know when the memory operations are complete?
The CPU is allowed to execute other programs while the DMA controller is transferring data.
Does this process interfere with the execution of the user programs? If so, describe what
forms of interference are caused.
The CPU can initiate a DMA operation by writing values into special registers that can be
independently accessed by the device. The device initiates the corresponding operation once
it receives a command from the CPU.
When the device is finished with its operation, it interrupts the CPU to indicate the
completion of the operation. Both the device and the CPU can be accessing memory
simultaneously. The memory controller provides access to the memory bus in a fair manner
to these two entities. A CPU might therefore be unable to issue memory operations at peak
speeds since it has to compete with the device in order to obtain access to the memory bus.
10. Give two reasons why caches are useful. What problems do they solve? What problems do
they cause? If a cache can be made as large as the device for which it is caching (for instance,
a cache as large as a disk), why not make it that large and eliminate the device?
Caches are useful when two or more components need to exchange data, and the
components perform transfers at differing speeds.
Caches solve the transfer problem by providing a buffer of intermediate speed between
the components. If the fast device finds the data it needs in the cache, it need not wait for
the slower device.
The data in the cache must be kept consistent with the data in the components. If
a component has a data value change, and the datum is also in the cache, the cache must
also be updated. This is especially a problem on multiprocessor systems where more than
one process may be accessing a datum.
A component may be eliminated by an equal-sized cache, but only if:
(a) the cache and the component have equivalent state-saving capacity (that is, if the
component retains its data when electricity is removed, the cache must retain data as
well), and
(b) the cache is affordable, because faster storage tends to be more expensive.
11. Consider an SMP system similar to what is shown below. Illustrate with an example how data
residing in memory could in fact have two different values in each of the local caches.
Say processor 1 reads data A with value 5 from main memory into its local cache. Similarly,
processor 2 reads data A into its local cache as well. Processor 1 then updates A to 10.
However, since A resides in processor 1’s local cache, the update only occurs there and not in
the local cache for processor 2.
In a digital computer, CPUs and multiple device controllers are connected through a common
bus, called the system bus, to shared memory. For this reason it is called a bus-organized
computer. An analog computer is not bus-organized.
As the CPU registers are very fast, memory should be able to keep up with them. But fast
storage tends to be expensive and power hungry, so only a small amount can be provided. We
can use levels of increasingly large (and increasingly slow) storage, with the information most
likely to be used soon stored at each level. Often the information at each larger level is a
superset of the information stored at the next smaller level.
Memory levels get slower and larger as they get farther from the processor.
Many types of resources – Some (such as CPU cycles, main memory, and file
storage) may have special allocation code, others (such as I/O devices) may have
general request and release code
Accounting – To keep track of which users use how much and what kinds of computer
resources
Protection and security – The owners of information stored in a multiuser or
networked computer system may want to control use of that information, concurrent
processes should not interfere with each other
Protection involves ensuring that all access to system resources is controlled
Security of the system from outsiders requires user authentication, extends to
defending external I/O devices from invalid access attempts
If a system is to be protected and secure, precautions must be instituted throughout
it. A chain is only as strong as its weakest link.
6. System Calls
16. UNIX
UNIX – limited by hardware functionality, the original UNIX operating system had
limited structuring. The UNIX OS consists of two separable parts
Systems programs
The kernel
o Consists of everything below the system-call interface and above the physical
hardware
o Provides the file system, CPU scheduling, memory management, and other
operating-system functions; a large number of functions for one level
Traditional UNIX System Structure
The operating system is divided into a number of layers (levels), each built on top of
lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the
user interface.
With modularity, layers are selected such that each uses functions (operations) and
services of only lower-level layers
Layered Operating System
20. Modules
Most modern operating systems implement kernel modules
Uses object-oriented approach
Each core component is separate
Each talks to the others over known interfaces
Each is loadable as needed within the kernel
Overall, similar to layers but more flexible
Solaris Modular Approach
A virtual machine takes the layered approach to its logical conclusion. It treats
hardware and the operating system kernel as though they were all hardware
A virtual machine provides an interface identical to the underlying bare hardware
The host operating system creates the illusion that a process has its own processor and
its own (virtual) memory
Each guest provided with a (virtual) copy of underlying computer
VMware Architecture
System calls allow user-level processes to request services of the operating system.
2. List five services provided by an operating system, and explain how each creates
convenience for users. In which cases would it be impossible for user-level programs to
provide these services? Explain your answer.
Five main services (not including accounting, or protection and security) are:
Program execution
The operating system loads the contents (or sections) of a file into memory and begins its
execution. A user level program could not be trusted to properly allocate CPU time.
I/O operations
Disks, tapes, serial lines, and other devices must be communicated with at a very low level.
The user need only specify the device and the operation to perform on it, while the system
converts that request into device- or controller-specific commands. User-level programs
cannot be trusted to access only devices they should have access to and to access them only
when they are otherwise unused.
File-system manipulation
There are many details in file creation, deletion, allocation, and naming that users should not
have to perform.
Blocks of disk space are used by files and must be tracked. Deleting a file requires removing
the name file information and freeing the allocated blocks. Protections must also be checked
to assure proper file access. User programs could neither ensure adherence to protection
methods nor be trusted to allocate only free blocks and de-allocate blocks on file deletion.
Communications
Message passing between systems requires messages to be turned into packets of
information, sent to the network controller, transmitted across a communications medium,
and reassembled by the destination system. Packet ordering and data correction must take
place. Again, user programs might not coordinate access to the network device, or they might
receive packets destined for other processes.
Error detection
Error detection occurs at both the hardware and software levels. At the hardware level, all
data transfers must be inspected to ensure that data have not been corrupted in transit. All
data on media must be checked to be sure they have not changed since they were written to
the media. At the software level, media must be checked for data consistency; for instance,
whether the numbers of allocated and unallocated blocks of storage match the total number
on the device. There, errors are frequently process independent (for instance, the corruption
of data on a disk), so there must be a global program (the operating system) that handles all
types of errors. Also, by having errors processed by the operating system, processes need not
contain code to catch and correct all the errors possible on a system.
System programs can be thought of as bundles of useful system calls. They provide basic
functionality to users so that users do not need to write their own programs to solve common
problems.
4. Describe three general methods for passing parameters to the operating system.
Parameters can be (1) passed in registers; (2) stored in a block or table in memory, with the
address of the block passed as a parameter in a register; or (3) pushed onto the stack by the
program and popped off the stack by the operating system.
5. Would it be possible for the user to develop a new command interpreter using the system-call
interface provided by the operating system?
A user should be able to develop a new command interpreter using the system-call interface
provided by the operating system. The command interpreter allows a user to create and
manage processes and also determine ways by which they communicate (such as through
pipes and files). As all of this functionality could be accessed by a user level program using
the system calls, it should be possible for the user to develop a new command-line
interpreter.
6. What is the main advantage of the microkernel approach to system design? How do user
programs and system services interact in a microkernel architecture? What are the
disadvantages of using the microkernel approach?
The main advantage is that the operating system is easier to extend and to port, and more
reliable and secure, since most services run as user processes rather than in the kernel. User
programs and system services interact by exchanging messages through the microkernel,
which provides the communication facility. The main disadvantage is the performance
overhead of this message passing between user space and kernel space.
7. In what ways is the modular kernel approach similar to the layered approach? In what ways
does it differ from the layered approach?
The modular kernel approach requires subsystems to interact with each other through
carefully constructed interfaces that are typically narrow (in terms of the functionality that is
exposed to external modules). The layered kernel approach is similar in that respect.
However, the layered kernel imposes a strict ordering of subsystems such that subsystems at
the lower layers are not allowed to invoke operations corresponding to the upper-layer
subsystems. There are no such restrictions in the modular-kernel approach, wherein modules
are free to invoke each other without any constraints.
8. What is the relationship between a guest operating system and a host operating system in a
system like VMware? What factors need to be considered in choosing the host operating
system?
A guest operating system provides its services by mapping them onto the functionality
provided by the host operating system. A key issue that needs to be considered in choosing
the host operating system is whether it is sufficiently general in terms of its system-call
interface in order to be able to support the functionality associated with the guest operating
system.
2. Process in Memory
3. Process State
As a process executes, it changes state
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
Diagram of Process State
9. Schedulers
Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU
Addition of Medium Term Scheduling
A parent process creates child processes, which, in turn, create other processes,
forming a tree of processes
Generally, a process is identified and managed via a process identifier (pid)
Resource sharing
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution
Parent and children execute concurrently
Parent waits until children terminate
Address space
Child duplicate of parent
Child has a program loaded into it
UNIX examples
fork system call creates new process
exec system call used after a fork to replace the process’ memory space with a new
program
Cooperating process can affect or be affected by other processes, including sharing data
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need interprocess communication (IPC)
Two models of IPC
Shared memory
Message passing
16. Synchronization
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
Blocking send has the sender block until the message is received
Blocking receive has the receiver block until a message is available
Non-blocking is considered asynchronous
Non-blocking send has the sender send the message and continue
Non-blocking receive has the receiver receive a valid message or null
Benefits
Responsiveness
Resource Sharing
Economy
Scalability
Examples
Windows XP/2000
Solaris
Linux
Tru64 UNIX
Mac OS X
One-to-One Model
Each user-level thread maps to kernel thread
Examples
o Windows NT/XP/2000
o Linux
o Solaris 9 and later
Many-to-Many Model
Allows many user level threads to be mapped to many kernel threads
At any given point in time, while a process is in execution, it can be uniquely characterized
by a number of elements, known as process elements. They are as follows:
Identifier: A unique identifier associated with this process, to distinguish it from all other
processes.
State: The current state of the process; for example, if the process is currently executing, it is in the running state.
Priority: Priority Level relative to other processes.
Program counter: The address of the next instruction in the program to be executed.
Memory pointers: Includes pointers to the program code and data associated with this
process, plus any memory blocks shared with other processes.
Context data: These are data that are present in registers in the processor while the
process is executing.
I/O status information: Includes outstanding I/O requests, I/O devices (e.g., tape drives)
assigned to this process, a list of files in use by the process, and so on.
Accounting information: May include the amount of processor time and clock time used,
time limits, account numbers, and so on.
In general, the operating system must save the state of the currently running process and
restore the state of the process scheduled to be run next. Saving the state of a process
typically includes the values of all the CPU registers in addition to memory allocation.
Context switches must also perform many architecture-specific operations, including flushing
data and instruction caches.
3. Describe the differences among short-term, medium-term, and long term scheduling.
Short-term (CPU scheduler): selects from jobs in memory those jobs that are ready to
execute and allocates the CPU to them.
Medium-term: used especially with time-sharing systems as an intermediate scheduling
level. A swapping scheme is implemented to remove partially run programs from
memory and reinstate them later to continue where they left off.
Long-term (job scheduler): determines which jobs are brought into memory for
processing.
The primary difference is in the frequency of their execution. The short-term scheduler must
select a new process quite often. The long-term scheduler is used much less often, since it
handles placing jobs in the system and may wait a while before admitting a new job.
4. What are two differences between user-level threads and kernel-level threads? Under what
circumstances is one type better than the other?
User-level threads are unknown by the kernel, whereas the kernel is aware of kernel
threads.
On systems using either M:1 or M:N mapping, user threads are scheduled by the thread
library and the kernel schedules kernel threads.
Kernel threads need not be associated with a process whereas every user thread belongs
to a process. Kernel threads are generally more expensive to maintain than user threads as
they must be represented with a kernel data structure.
5. What resources are used when a thread is created? How do they differ from those used when
a process is created?
Because a thread is smaller than a process, thread creation typically uses fewer resources
than process creation. Creating a process requires allocating a process control block (PCB), a
rather large data structure.
The PCB includes a memory map, list of open files, and environment variables. Allocating
and managing the memory map is typically the most time-consuming activity. Creating either
a user or kernel thread involves allocating a small data structure to hold a register set, stack,
and priority.
6. Describe the actions taken by a kernel to context-switch between kernel level threads.
Context switching between kernel threads typically requires saving the value of the CPU
registers from the thread being switched out and restoring the CPU registers of the new
thread being scheduled.
Increased responsiveness – Even if one thread is blocked, other threads can still run.
Resource sharing: – Code sharing, and memory sharing.
Economy – Creation of threads uses fewer resources as compared to processes.
Taking advantage of multiprocessors – Threads can run parallel on different processors.
8. Draw the process state diagram depicting the five states that a process can be in.
9. Which of the following components of program state are shared across threads in a
multithreaded process?
a) Register values
b) Heap memory
c) Global variables
d) Stack memory
The threads of a multithreaded process share heap memory and global variables. Each thread
has its separate set of register values and a separate stack.
3. CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates
the CPU to one of them
CPU scheduling decisions may take place when a process:
i. Switches from running to waiting state
ii. Switches from running to ready state
iii. Switches from waiting to ready
iv. Terminates
Scheduling under circumstances i and iv is non-preemptive
All other scheduling is preemptive
4. Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another
running
5. Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
a. Case 1
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive at time 0 in the order: P1, P2, P3
The Gantt chart for the schedule runs P1 from 0 to 24, P2 from 24 to 27, and P3 from 27 to 30
Waiting times: P1 = 0, P2 = 24, P3 = 27
Average waiting time: (0 + 24 + 27) / 3 = 17
9. Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
Preemptive
Non-preemptive
SJF is a priority scheduling where priority is the predicted next CPU burst time
Problem ≡ Starvation – low priority processes may never execute
Solution ≡ Aging – as time progresses increase the priority of the process
1. Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound
programs?
I/O-bound programs have the property of performing only a small amount of computation
before performing I/O. Such programs typically do not use up their entire CPU quantum.
CPU-bound programs, on the other hand, use their entire quantum without performing any
blocking I/O operations. Consequently, one could make better use of the computer’s
resources by giving higher priority to I/O-bound programs and allow them to execute ahead
of the CPU-bound programs.
Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking
the CPU away and allocating it to another process. Non-preemptive scheduling ensures that a
process relinquishes control of the CPU only when it finishes with its current CPU burst.
3. Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:
a) Draw the Gantt charts illustrating the execution of these processes using FCFS, Non-
Preemptive SJF and RR (quantum=2) scheduling.
b) What is the turnaround time of each process for each of the scheduling algorithms?
c) What is the waiting time of each process for each of the scheduling algorithms?
d) Which of the schedules in Q3 results in the minimal average waiting time (over all
processes)?
a)
b) Turnaround time of each process:
        FCFS   RR   SJF
P1       10    21    10
P2       11     3    12
P3       13    10    14
P4       13     6     8
P5       17    15    17
c) Waiting time of each process:
        FCFS   RR   SJF
P1        0    11     0
P2        9     1    10
P3       10     7    11
P4       12     5     7
P5       12    10    12
d) Average waiting times:
FCFS: (0 + 9 + 10 + 12 + 12) / 5 = 43 / 5 = 8.6
RR: (11 + 1 + 7 + 5 + 10) / 5 = 34 / 5 = 6.8 (minimum)
SJF: (0 + 10 + 11 + 7 + 12) / 5 = 40 / 5 = 8
4. Explain the difference in the degree to which the following scheduling algorithms
discriminate in favor of short processes.
a) FCFS
b) RR
a) FCFS – discriminates against short jobs since any short jobs arriving after long jobs will
have a longer waiting time.
b) RR – treats all jobs equally (giving them equal bursts of CPU time) so short jobs will be
able to leave the system faster since they will finish first.
5. Consider the following processes with the length of the CPU burst time given in milliseconds.
The processes arrive according to the arrival times shown. (Low numbers represent high
priority.)
a) Draw 3 (three) Gantt charts illustrating the execution of these processes using the
following scheduling algorithms:
i. Non-preemptive SJF
ii. Preemptive priority scheduling
iii. Round Robin (quantum=3) scheduling
b) Calculate the average waiting time for Preemptive priority and Round Robin Scheduling
a)
i. Non-preemptive SJF
b)
Waiting Time (Preemptive priority)
P0: (6 − 0) + (18 − 13) = 6 + 5 = 11
P1: 41 − 4 = 37
P2: 19 − 7 = 12
P3: 13 − 13 = 0
P4: 28 − 9 = 19
P5: 0 − 0 = 0
2. Producer
while (true) {
/* produce an item and put in nextProduced */
while (count == BUFFER_SIZE); // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}
3. Consumer
while (true) {
while (count == 0); // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in nextConsumed */
}
4. Race Condition
count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
count-- could be implemented as
register2 = count
register2 = register2 – 1
count = register2
6. Peterson’s Solution
Two process solution
Assume that the LOAD and STORE instructions are atomic; that is, cannot be
interrupted.
The two processes share two variables:
int turn;
Boolean flag[2]
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = TRUE implies that process Pi is ready!
do {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = FALSE;
remainder section
} while (TRUE);
8. Synchronization Hardware
Many systems provide hardware support for critical section code
Uniprocessors – could disable interrupts
Currently running code would execute without preemption
Generally too inefficient on multiprocessor systems
o Operating systems using this not broadly scalable
Modern machines provide special atomic hardware instructions
o Atomic = non-interruptable
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
10. Semaphore
Synchronization tool; its full implementation (below) does not require busy waiting,
although the simple definitions here still spin
wait (S) {
while (S <= 0); // no-op
S--;
}
signal (S) {
S++;
}
With each semaphore there is an associated waiting queue. Each entry in a waiting
queue has two data items:
value (of type integer)
pointer to next record in the list
Two operations:
block – place the process invoking the operation on the appropriate waiting queue.
wakeup – remove one of processes in the waiting queue and place it in the ready
queue.
Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
do {
// produce an item
wait (empty);
wait (mutex);
// add the item to the buffer
signal (mutex);
signal (full);
} while (TRUE);
do {
wait (full);
wait (mutex);
// remove an item from buffer to nextc
signal (mutex);
signal (empty);
} while (TRUE);
Readers-Writers Problem
A data set is shared among a number of concurrent processes
o Readers – only read the data set; they do not perform any updates
o Writers – can both read and write
do {
wait (wrt);
// writing is performed
signal (wrt);
} while (TRUE);
do {
wait (mutex);
readcount++;
if (readcount == 1)
wait (wrt);
signal (mutex);
// reading is performed
wait (mutex);
readcount--;
if (readcount == 0)
signal (wrt);
signal (mutex);
} while (TRUE);
Dining-Philosophers Problem
Shared data
o Bowl of rice (data set)
o Semaphore chopstick [5] initialized to 1
The structure of Philosopher i:
do {
wait (chopstick[i]);
wait (chopstick[(i + 1) % 5]);
// eat
signal (chopstick[i]);
signal (chopstick[(i + 1) % 5]);
// think
} while (TRUE);
16. Monitors
monitor monitor-name
{
// shared variable declarations
procedure P1 (...) {......}
...
procedure Pn (...) {......}
...
initialization code (...) {
...
}
}
Solaris Synchronization
Implements a variety of locks to support multitasking, multithreading (including
real-time threads), and multiprocessing
Uses adaptive mutexes for efficiency when protecting data from short code
segments
Uses condition variables and readers-writers locks when longer sections of code
need access to data
Uses turnstiles to order the list of threads waiting to acquire either an adaptive
mutex or reader-writer lock
Windows XP Synchronization
Uses interrupt masks to protect access to global resources on uniprocessor systems
Uses spinlocks on multiprocessor systems
Also provides dispatcher objects which may act as either mutexes and semaphores
Dispatcher objects may also provide events
o An event acts much like a condition variable
Linux Synchronization
Linux:
o Prior to kernel Version 2.6, disables interrupts to implement short critical
sections
o Version 2.6 and later, fully preemptive
Linux provides:
o semaphores
o spin locks
1. What three conditions must be satisfied in order to solve the critical section problem?
The three conditions are mutual exclusion, progress, and bounded waiting. Mutual exclusion:
no thread may be executing in its critical section while another thread is executing in its
critical section. Progress: only those threads that are not executing in their remainder
sections can participate in the decision on which thread will enter its critical section next,
and this selection cannot be postponed indefinitely. Bounded waiting: a bound must exist on
the number of times that other threads are allowed to enter their critical sections after a
thread has made a request to enter its critical section and before that request is granted.
2. Write two short methods that implement the simple semaphore wait() and signal()
operations on global variable, S.
wait (S) {
while (S <= 0);
S--;
}
signal (S) {
S++;
}
The scenario involves five philosophers sitting at a round table with a bowl of food and five
chopsticks. Each chopstick sits between two adjacent philosophers. The philosophers are
allowed to think and eat. Since two chopsticks are required for each philosopher to eat, and
only five chopsticks exist at the table, no two adjacent philosophers may be eating at the
same time. A scheduling problem arises as to who gets to eat at what time. This problem is
similar to the problem of scheduling processes that require a limited number of resources.
4. Race conditions are possible in many computer systems. Consider a banking system with the
following two functions: deposit(amount) and withdraw(amount). These two
functions are passed the amount that is to be deposited or withdrawn from a bank account.
Assume a shared bank account exists between a husband and wife and concurrently the
husband calls the withdraw() function and the wife calls deposit(). Describe how a
race condition is possible and what might be done to prevent the race condition from
occurring.
Assume the balance in the account is 250.00; the husband calls withdraw(50) and the wife
calls deposit(100), so the correct final value should be 300.00. Because the two transactions
are not serialized, their updates can interleave: the husband's local value of the balance
becomes 200.00, but before he can commit the transaction, the deposit(100) operation takes
place and updates the shared balance to 300.00. We then switch back to the husband, and the
shared balance is overwritten with 200.00, an incorrect value. The race condition can be
prevented by making each transaction atomic, for example by performing the balance update
inside a critical section protected by a lock or semaphore.
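One way to prevent this race in C is to guard the shared balance with a mutex so that each read-modify-write runs as a critical section. This is a minimal sketch; the account_t type and function signatures are illustrative, not taken from the exam answer:

```c
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;  /* serializes access to the shared balance */
    double balance;
} account_t;

void deposit(account_t *a, double amount) {
    pthread_mutex_lock(&a->lock);    /* enter critical section        */
    a->balance += amount;            /* read-modify-write is atomic   */
    pthread_mutex_unlock(&a->lock);  /* leave critical section        */
}

void withdraw(account_t *a, double amount) {
    pthread_mutex_lock(&a->lock);
    a->balance -= amount;
    pthread_mutex_unlock(&a->lock);
}
```

With the lock held for the whole update, the husband's withdraw(50) and the wife's deposit(100) are serialized in some order, and the final balance is 300.00 either way.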
5. How does the signal() operation associated with monitors differ from the corresponding
operation defined for semaphores?
The signal() operation associated with monitors is not persistent in the following sense:
if a signal is performed and if there are no waiting threads, then the signal is simply ignored
and the system does not remember that the signal took place. If a subsequent wait operation
is performed, then the corresponding thread simply blocks. In semaphores, on the other hand,
every signal results in a corresponding increment of the semaphore value even if there are no
waiting threads. A future wait operation would immediately succeed because of the earlier
increment.
#define MAX_RESOURCES 5
int available_resources = MAX_RESOURCES;
3. System Model
Resource types R1, R2, …, Rm
CPU cycles, memory space, I/O devices
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
request
use
release
4. Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously:
Mutual exclusion: only one process at a time can use a resource
Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes
No preemption: a resource can be released only voluntarily by the process holding
it, after that process has completed its task
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that
P0 is waiting for a resource that is held by P1, P1 is waiting for a resource
that is held by P2, …, Pn−1 is waiting for a resource that is held by Pn, and
Pn is waiting for a resource that is held by P0.
5. Resource-Allocation Graph
A set of vertices V and a set of edges E
V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
request edge – directed edge Pi → Rj (Pi requests an instance of Rj)
assignment edge – directed edge Rj → Pi (Pi is holding an instance of Rj)
Basic Facts
If graph contains no cycles ⇒ no deadlock
If graph contains a cycle ⇒
o if only one instance per resource type, then deadlock
o if several instances per resource type, possibility of deadlock
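Since a cycle is what signals (possible) deadlock, a resource-allocation graph can be checked with an ordinary depth-first search for a back edge. The following C sketch is illustrative (the adjacency-matrix representation and node numbering are assumptions, not part of the lecture material):

```c
#include <stdbool.h>

#define NNODES 4   /* processes and resources together; sample size */

/* Detect a cycle in a directed resource-allocation graph by DFS.
   adj[u][v] == true means there is an edge u -> v (a request or an
   assignment edge).  A back edge to a node still on the DFS stack
   means the graph contains a cycle. */
static bool dfs(bool adj[NNODES][NNODES], int u,
                bool visited[NNODES], bool on_stack[NNODES]) {
    visited[u] = on_stack[u] = true;
    for (int v = 0; v < NNODES; v++) {
        if (!adj[u][v]) continue;
        if (on_stack[v]) return true;          /* back edge: cycle */
        if (!visited[v] && dfs(adj, v, visited, on_stack))
            return true;
    }
    on_stack[u] = false;
    return false;
}

bool has_cycle(bool adj[NNODES][NNODES]) {
    bool visited[NNODES] = { false };
    bool on_stack[NNODES] = { false };
    for (int u = 0; u < NNODES; u++)
        if (!visited[u] && dfs(adj, u, visited, on_stack))
            return true;
    return false;
}
```

Remember the caveat above: with several instances per resource type, a cycle only indicates the possibility of deadlock, not its certainty.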
7. Deadlock Prevention
Restrain the ways requests can be made
Mutual Exclusion – not required for sharable resources; must hold for nonsharable
resources
Hold and Wait – must guarantee that whenever a process requests a resource, it
does not hold any other resources
o Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has none
o Low resource utilization; starvation possible
No Preemption –
o If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are
released
o Preempted resources are added to the list of resources for which the process is
waiting
o Process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting
Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration
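The circular-wait rule above can be made concrete with locks: if every thread that needs two resources always acquires them in the same global order, no cycle of waiting threads can form. A minimal sketch in C (the lock names and counter are illustrative):

```c
#include <pthread.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* order 1 */
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* order 2 */
int transfers_done = 0;   /* shared work guarded by both locks */

/* Every thread that needs both resources takes the locks in the same
   enumeration order (a before b).  A cycle a -> b -> a can therefore
   never form, so the circular-wait condition is broken and this code
   cannot deadlock. */
void transfer(void) {
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    transfers_done++;                 /* critical section using both */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}
```

The classic deadlock arises precisely when one thread locks a then b while another locks b then a; imposing the total order rules that interleaving out.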
8. Deadlock Avoidance
Requires that the system has some additional a priori information available
Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need
The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition
Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes
9. Safe State
When a process requests an available resource, system must decide if immediate
allocation leaves the system in a safe state
System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that, for each Pi, the resources that Pi can still
request can be satisfied by the currently available resources + the resources
held by all Pj, with j < i
That is:
If Pi's resource needs are not immediately available, then Pi can wait until all
Pj have finished
When Pj is finished, Pi can obtain its needed resources, execute, return its
allocated resources, and terminate
When Pi terminates, Pi+1 can obtain its needed resources, and so on
Basic Facts
If a system is in safe state ⇒ no deadlocks
If a system is in unsafe state ⇒ possibility of deadlock
Avoidance ⇒ ensure that a system will never enter an unsafe state.
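The safe-state test described above is the safety algorithm of the Banker's algorithm: repeatedly look for a process whose remaining need fits in Work, pretend it finishes, and reclaim its allocation. A C sketch follows; the matrix sizes and sample data are assumptions for illustration:

```c
#include <stdbool.h>
#include <string.h>

#define NPROC 3
#define NRES  2

/* Banker's safety check: return true iff there is an order in which
   every process can finish, with Work growing as finished processes
   release their allocations. */
bool is_safe(int available[NRES],
             int alloc[NPROC][NRES],
             int need[NPROC][NRES]) {
    int work[NRES];
    bool finish[NPROC] = { false };
    memcpy(work, available, sizeof work);  /* Work = Available */

    for (int done = 0; done < NPROC; ) {
        bool progress = false;
        for (int i = 0; i < NPROC; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                 /* Pi can finish: reclaim */
                for (int j = 0; j < NRES; j++)
                    work[j] += alloc[i][j];
                finish[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;       /* no process can run: unsafe */
    }
    return true;                           /* a safe sequence exists */
}
```

In the avoidance algorithm proper, a request is granted only if the state that would result still passes this check.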
The optimistic assumption is that there will not be any form of circular wait in terms of
resources allocated and processes making requests for them. This assumption could be
violated if a circular wait does indeed occur in practice.
c) Yes. Since
i. (0,4,2,0) ≤ Needi = (0,7,5,0)
ii. (0,4,2,0) ≤ Available = (1,5,2,0)
iii. The new system state after the allocation is made is
3. Explain the four conditions that must hold for deadlock to occur.
Mutual exclusion: Only one process at a time can use a resource. If another
process requests that resource, the requesting process must be delayed until the resource
has been released.
Hold and wait: A process holding at least one resource is waiting to acquire additional
resources held by other processes.
No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
Circular wait: There exists a set {P0, P1, …, Pn} of waiting processes such that P0
is waiting for a resource that is held by P1, P1 is waiting for a resource that is held
by P2, …, Pn−1 is waiting for a resource that is held by Pn, and Pn is waiting
for a resource that is held by P0.
4. Consider a system consisting of four resources of the same type that are shared by three
processes, each of which needs at most two resources. Show that the system is deadlock-free.
Suppose the system is deadlocked. This implies that each process is holding one resource and
is waiting for one more. Since there are three processes and four resources, one process must
be able to obtain two resources. This process requires no more resources and therefore will
return its resources when done, allowing another process to acquire two resources and finish,
and so on. This contradicts the assumption that the system is deadlocked, so the system is
deadlock-free.
(Trimester 1, 2007/2008)
Question 1
a) What are the three main purposes of an operating system? (3 Marks)
b) The operating system controls processes and allocates resources to processes. In order to
control resources and processes, the OS needs to know the current status of each of the
entities it is managing, so the operating system constructs and maintains four types of tables.
Explain each of the tables. (4 Marks)
d) A computer has a cache, main memory, and a disk used for virtual memory. If a referenced
word is in the cache, 20 ns are required to access it. If it is in main memory but not in the
cache, 60 ns are needed to load it into the cache (this includes the time to originally check the
cache), and then the reference is started again. If the word is not in main memory, 12 ms are
required to fetch the word from disk, followed by 60 ns to copy it to the cache, and then the
reference is started again. The cache hit ratio is 0.9 and the main-memory hit ratio is 0.6.
What is the average time in ns required to access a referenced word on this system?
(5 Marks)
Question 2
a) List and briefly define three techniques for I/O operations. (3 Marks)
b) Direct memory access is used for high-speed I/O devices in order to avoid increasing the
CPU’s execution load.
i) How does the CPU interface with the device to coordinate the transfer? (2 Marks)
ii) The CPU is allowed to execute other programs while the DMA controller is transferring
data. Does this process interfere with the execution of the user programs? If so, describe
what forms of interference are caused. (2 Marks)
c) What are the main advantages of the microkernel approach to system design? (3 Marks)
d) What two advantages do threads have over multiple processes? What major disadvantage do
they have? Thread has two types of scheduling. Distinguish both. (5 Marks)
Question 3
a) Describe the actions a kernel takes to context switch between processes. (3 Marks)
b) Consider the following set of processes, with the processing time and arrival time given in
milliseconds:
Process Processing time Arrival time
P1 4 0
P2 5 2
P3 4 2
P4 5 6
P5 4 4
ii) Draw the Gantt charts illustrating the execution of these processes using FCFS,
Preemptive SJF and RR (quantum=3) scheduling. (3 Marks)
iii) Calculate the average waiting time for each algorithm and determine which algorithm is
the best and why? (3 Marks)
c) Consider a system with 10 printers, 2 plotters and 2 scanners with 4 processes, A, B, C and D
have the following requirements and allocation for these resources.
                 ALLOCATION                     MAXIMUM
Process   Printer  Plotter  Scanner    Printer  Plotter  Scanner
A            4        1        0          9        1        1
B            1        0        0          4        0        1
C            3        0        1          7        1        1
D            2        0        0          3        0        1
(Trimester 3, 2007/2008)
Question 1
a. Describe two general roles of an operating system and elaborate on why these roles are
important. (4 marks)
b. Multiprogramming and multitasking refer to the situation where more than one process is
apparently able to execute simultaneously. Describe how this is possible or achievable in a
computer system that has only ONE processor (CPU). (4 marks)
c. What is a process? State any three (3) attributes of a process. (4 marks)
d. State any three (3) relationships between threads and processes. (3 marks)
Question 2
a. What is the function of the ready queue? (2 marks)
b. Consider the following scenario where there are 4 processes arriving at different times and
having the CPU burst time as recorded in the table below.
ii. Calculate the waiting time and turnaround time for every process under the Shortest Job
First (Pre-emptive) algorithm. (4 marks)
iii. What is the average turnaround time for processes running under the Shortest Job First
(Pre-emptive) algorithm? (1 mark)
c. Describe two differences between the short-term scheduler and the long-term scheduler with
respect to process management in operating systems. (4 marks)
Question 3
a. Assume the operating system detects that the system is deadlocked. What can the operating
system do to recover from the deadlock? (3 marks)
b. What must Banker’s Algorithm know a priori in order to prevent deadlock? (2 marks)
c. Consider the following situation using the Banker’s Algorithm. Assuming the system has a
total of 12 instances of one device type.
Determine the remaining needs for each job in the system. (2 marks)
Determine if the system is safe or unsafe. If the system is in a safe state, list the sequence of
requests and releases that will make it possible for all processes to run till completion. If the
system is unsafe, show how it is possible for deadlock to occur. (4 marks)
(Trimester 3, 2008/2009)
QUESTION 1
a) What are the TWO main functions of an operating system?
(2 marks)
c) For each of the following system calls, give a condition that causes it to fail: fork, exec, and
unlink.
(3 marks)
d) Draw a diagram to indicate the THREE states that a process can be in and the transition
between them. Explain the transitions.
(3 marks)
QUESTION 3
a) Define deadlocks. List the strategies for dealing with them.
(5 marks)
QUESTION 4
a) What are the FOUR conditions needed to avoid race conditions?
(4 marks)
b) Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
i) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a
non-preemptive priority (a smaller priority number implies a higher priority), and RR
(quantum=1) scheduling.
ii) What is the turnaround time of each process for each of the scheduling algorithms?
iii) What is the waiting time of each process for each of the scheduling algorithms in (i)?
iv) Which of the schedules in (i) results in the minimal average waiting time (over all
processes)?
(9 marks)
(Trimester 2, 2009/2010)
QUESTION 1
(a) List five services provided by an operating system that are designed to make it more
convenient for users to use the operating system. In what cases would it be impossible for
user-level programs to provide these services? Explain. [5]
(b) Direct memory access is used for high-speed I/O devices in order to avoid increasing the
CPU’s execution load.
(i) How does the CPU interface with the device to coordinate the transfer?
(ii) The CPU is allowed to execute other programs while the DMA controller is transferring
data. Does this process interfere with the execution of the user programs? If so, describe
what forms of interference are caused. [4]
QUESTION 2
(a) What are the benefits of multithreading? [3]
(c) Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:
The processes are assumed to have arrived in the order P1, P2, P3, P4, and P5 all at time 0.
i) Draw the Gantt charts illustrating the execution of these processes using SJF, and non-
preemptive priority scheduling (a smaller number implies high priority). [2]
ii) What is the turnaround time of each process for each of the scheduling algorithms in part
I? [2]
iii) What is the waiting time of each process for each of the scheduling algorithms in part I?
[2]
(Trimester 1, 2010/2011)
QUESTION 1:
a. (i) Name any FOUR resources of a computer?
(ii) Why is an operating system called a resource allocator?
(2 marks)
c. State the reason for considering direct memory access (DMA) as an efficient mechanism for
performing I/O data transfer. (2 marks)
d. Describe briefly THREE general methods used to pass parameters to the operating system
during system calls. (3 marks)
QUESTION 2:
a. Name and briefly describe the different states that a process can exist in at any given time.
(4 marks)
b. State the THREE conditions that must be satisfied in order to solve the critical section
problem? (3 marks)
c. Write the short methods that implement the simple semaphore operations wait() and
signal() on global variable S. (3 marks)
QUESTION 3:
a. State the difference between preemptive and non-preemptive scheduling?
(2 marks)
b. Consider the following set of processes, with the length of the CPU-burst and arrival time
given in milliseconds:
Answer the following with respect to Pre-emptive Shortest Job First (SJF) and Round
Robin (RR) (time quantum = 3 milliseconds) scheduling algorithms.
(i) Draw the Gantt charts illustrating the execution of the given processes for each of the
above scheduling algorithms. (3+3=6 marks)
(ii) Identify the turnaround time and waiting time of process P5 for each of the above
scheduling algorithms. (1+1=2 marks)
QUESTION 4:
a. List the FOUR conditions that must hold for deadlock to occur. (2 marks)
b. Consider the following system with FIVE processes and FOUR resource types.
               Allocation         Max
Process      A  B  C  D       A  B  C  D
P0 1 1 1 1 6 2 1 2
P1 1 1 0 0 1 7 4 0
P2 1 1 1 1 2 4 1 4
P3 1 2 2 1 6 6 4 2
P4 1 1 0 1 4 2 0 3
(iii) Find a sequence of processes that will get the system into a safe state.
(5 marks)
(Trimester 3, 2010/2011)
QUESTION 2:
a. What are the THREE issues associated with process creation in an operating system? List
down the possibilities of each issue. (5 marks)
b. State and briefly describe the TWO general approaches for thread cancellation. (2 marks)
c. Write a short program that illustrates the use of the semaphore mutex to provide mutual
exclusion for a critical section. Assume mutex is initialized to the value 1. (3 marks)
QUESTION 3:
a. Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound
programs? (3 marks)
b. Consider the following set of processes, with the length of the CPU-burst time and arrival
time given in milliseconds:
(i) Draw the Gantt charts illustrating the execution of these processes using Pre-emptive
Shortest Job First (SJF) and Round Robin (RR) (quantum=4) scheduling. (5 marks)
(ii) What is the turnaround time of P5 for each of the given scheduling algorithms (SJF
and RR)? (1 mark)
(iii) What is the waiting time of P3 for each of the given scheduling algorithms (SJF and
RR)? (1 mark)
QUESTION 4:
a. Draw an example of a deadlock system using a resource allocation graph involving
processes P1 , P2 and P3 . (2 marks)
b. Consider the following system with FIVE processes and FOUR resource types.
A has 9 instances
B has 7 instances
C has 4 instances
D has 5 instances
(iii) Find a sequence of processes that will get the system into a safe state. Write the steps in
detail. (5 marks)
(Trimester 1, 2011/2012)
QUESTION 1:
a. How do asymmetric and symmetric clustering methods provide high availability service?
(2 marks)
b. Draw the diagram of MS-DOS layer structure and state any ONE weakness that can be found
in the MS-DOS layer structure. (3 marks)
c. Draw the process state diagram stating the five different states that a process can be in.
(2 marks)
d. Provide any TWO reasons as to why partially executed processes can be swapped out and
later swapped in by the medium-term scheduler. (2 marks)
e. State and briefly explain any THREE benefits of multithreaded programming. (3 marks)
QUESTION 2:
a. The CPU scheduler selects a process from the processes in memory that are ready to execute
and allocates the CPU to that process, based on a certain scheduling algorithm. State the
FIVE optimization criteria that can be used for comparing the different CPU scheduling
algorithms. (2.5 marks)
b. Consider the following set of processes, with the length of the CPU-burst time and arrival
time given in milliseconds:
Answer the following with respect to the Pre-emptive Shortest Job First and Round Robin
(time quantum = 4 milliseconds) scheduling algorithms.
(i) Draw the Gantt charts illustrating the execution of the given processes for each of the
above scheduling algorithms. (3+4.5=7.5 marks)
(ii) Identify the turnaround time of process P5 and waiting time of process P3 for each
of the above scheduling algorithms. (2 marks)
QUESTION 3:
a. Consider the use of locks as a solution to the critical section problem. Provide pseudocode
to demonstrate how this is done. (2 marks)
b. (i) State any TWO basic operations that can be performed on a counting semaphore.
(1 mark)
(ii) State the difference between binary and counting semaphores. (1 mark)
c. Consider the following system with FIVE processes and FOUR resource types:
               Allocation         Max
Process      A  B  C  D       A  B  C  D
P0 1 0 1 0 6 4 3 4
P1 1 1 0 0 5 2 2 4
P2 1 2 0 1 4 4 2 4
P3 1 1 2 1 3 2 2 2
P4 1 0 1 1 7 4 4 6
(iii) Find a sequence of processes that will get the system into a safe state. Show all the steps.
(5 marks)
QUESTION 2:
a. The CPU scheduler selects a process from the processes in memory that are ready to execute,
and allocates the CPU to that process based on a certain scheduling algorithm. State the
FOUR circumstances under which a CPU scheduling decision is required. (2 marks)
b. The dispatcher is the module involved in the CPU-scheduling function. State any TWO
functions of the dispatcher. (2 marks)
c. Consider the following set of processes, with the length of the CPU-burst time and arrival
time given in milliseconds.
(i) Draw the Gantt chart illustrating the execution of the above processes using Pre-
emptive Shortest Job First scheduling algorithm and calculate the turnaround time for
the processes P1 and P5 . (4 marks)
(ii) Draw the Gantt chart illustrating the execution of the above processes using Round
Robin (quantum = 3) scheduling algorithm and calculate the waiting time for the
processes P2 and P3 . (4 marks)
QUESTION 3:
a. Consider the use of semaphores as a solution to a critical section problem by providing
mutual exclusion. Provide the pseudocode to demonstrate how this is done with the wait and
signal methods. (2 marks)
b. State what is meant by a RACE condition as applied to process synchronization. (2 marks)
A has 8 instances
B has 8 instances
C has 8 instances
D has 10 instances
Answer the following questions using the Banker’s Algorithm for deadlock avoidance:
(iii) Identify a sequence of processes that will get the system into a safe state. Show all the
steps. (5 marks)
(Trimester 3, 2011/2012)
QUESTION 1
a) What are the three main purposes of an operating system?
[3 marks]
b) Describe three general methods for passing parameters to the operating system.
[3 marks]
c) List and explain any FIVE elements of the process control block (PCB).
[5 marks]
QUESTION 2
a. What three conditions must be satisfied in order to solve the critical section problem?
[3 marks]
b. Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:
i) Draw the Gantt charts illustrating the execution of these processes using First-Come-
First-Served (FCFS), Pre-emptive Shortest Job First (SJF) and Round Robin (RR)
(quantum = 4) scheduling.
[9 marks]
ii) What is the turnaround time of P5, for each of the scheduling algorithms?
[1.5 marks]
iii) What is the waiting time of P5, for each of the scheduling algorithms?
[1.5 marks]
QUESTION 3
a. The following program segment is used to manage a finite number of instances of an
available resource. The maximum number of resources and the number of available resources
are declared as follows:
#define MAX_RESOURCES 5
int available_resources = MAX_RESOURCES;
When a process wants to return a number of resources, it calls the increase() function:
b. Consider the following system with FIVE processes and FOUR resource types:
A has 8 instances
B has 8 instances
C has 8 instances
D has 8 instances
(iii) Find a sequence of processes that will get the system into a safe state, if there is any.
[5 marks]
(Trimester 1, 2012/2013)
QUESTION 1:
a. Briefly explain the relationship between an application programming interface (API),
system-call interface, and system calls. (3 marks)
b. State any TWO advantages and ONE disadvantage of using a microkernel approach.
(3 marks)
c. List and briefly explain the FOUR major categories of the benefits of multithreaded
programming. (4 marks)
d. State any TWO reasons for considering direct memory access (DMA) as an efficient
mechanism for performing I/O. (2 marks)
QUESTION 2:
a. Briefly explain the concept of a context switch with respect to process scheduling. (2
marks)
b. State any TWO situations in which a parent process may terminate its children processes.
(2 marks)
(i) Complete the Burst-time column in the table given above, based on the following Gantt
chart, which is given for First-Come-First-Served (FCFS) CPU scheduling algorithm.
(1 mark)
(ii) Draw the Gantt charts illustrating the execution of these processes using Pre-emptive
Shortest Job First (SJF) and Round Robin (RR) (quantum=4) CPU scheduling.
(5 marks)
(iii) What is the turnaround time of the process of P3 , and waiting time of the process
P4 for the SJF and RR algorithms? (2 marks)
QUESTION 3:
a. Briefly describe the dining-philosophers problem and how it relates to process
synchronization problem in operating systems. (3 marks)
b. State the difference between deadlock prevention and deadlock avoidance methods.
(2 marks)
c. Consider the following system with FIVE processes and FOUR resource types:
d. There are TWO different ways that commands can be processed by a command interpreter.
One approach is to allow the command interpreter to contain the code needed to execute the
command. The other approach is to implement the commands through system programs.
State any ONE advantage and any ONE disadvantage of using each of the approaches.
(2 × 2=4 marks)
QUESTION 2:
a. State the role played by dispatcher module in CPU Scheduling. State the THREE tasks
involved in performing that role. (2 marks)
b. State the effect of the following on the performance of a Round Robin (RR) scheduling
algorithm: (2 marks)
(i) Time quantum size that is too long.
(ii) Time quantum size that is too short.
(i) Draw the Gantt charts illustrating the execution of these processes using the following
CPU scheduling algorithms. (3 × 2=6 marks)
First-Come-First-Served (FCFS)
Pre-emptive Shortest Job First (SJF)
Round Robin (RR) (quantum = 4)
(ii) What is the turnaround time of P5 , for the SJF and RR algorithms? (1 mark)
(iii) What is the waiting time of P3 , for the SJF and RR algorithms?
(1 mark)
QUESTION 3:
a. (i) What is meant by the term busy waiting, with respect to process synchronization?
(ii) State how the semaphore operations can be modified to overcome the disadvantage of
busy waiting? (1+2=3 marks)
A has 10 instances
B has 10 instances
C has 10 instances
D has 10 instances
(iii) Find a sequence of processes that will get the system into a safe state, if there is any.
Show all the required steps. (5 marks)