
OS Viva questions

1) What is an Operating System?


An Operating System (OS) is an interface over the computer hardware
that lets users interact with the hardware with ease. The OS is
software that allows the user to do file management, device
management, computation, etc., and also takes care of resource
allocation, memory management, process management, etc.

2) What are the applications of an Operating System?


The major applications of an operating system are:

 Process Management
 Memory Management
 Device Management
 File Management
 Others (Security, Coordination, Accounting, etc)

Process Management
The OS decides which process gets processed when and for how
much time through process scheduling. It also helps different
processes coordinate when they are working with shared resources
and data. It also takes care of handling deadlock scenarios where one
process is waiting on the other process to release a resource and vice
versa.

Memory Management
The OS tracks the status of each memory location and decides which
process is allocated which memory locations, when, and for how long.

Device Management
The OS manages communication with devices through drivers. It
tracks the status of each device and decides which process gets to
communicate with the device, when, and for how long.

File Management
The OS allows users or programs to create, read, update or delete
files and directories. It also keeps track of all the meta details about
the files and directories through a file system.

3) What are the services provided by an Operating System?
An Operating System provides different services to the users and the
programs. The common services provided by an operating system
are:

 Program Execution: The OS allows program execution by
creating a process for every program that the user wants to
execute, including the applications that the user uses.
 Input/Output Operations: The OS handles the input-output
operations through device drivers. The drivers help create an
abstraction over actual system operations by allowing us to
interact with the system using user-friendly devices.
 File System Management: The OS allows the users to create,
read, update or delete files and directories. It also allows setting
access controls on the files and directories.
 Communication: The OS allows communication between
different processes through Inter-Process Communication (IPC).
 Error Detection and Handling: An error can happen in the
hardware (CPU, RAM, I/O devices, etc) or any of the software.
The OS detects and handles the error so that the user can use
the system.
 Resource Allocation and Management: The OS manages all the
resources like memory, CPU, storage, etc., and decides which
process is allocated which resource, when, and for how long.
 Accounting: The OS accounts for and tracks all activities,
including the errors happening on the computer.
 Security and Protection: The OS provides security and protection
through different security and access control functionalities to
save the data from getting stolen or corrupted by malware.

4) What are Kernel, Booting, and System Calls?


The kernel is the core part of the operating system. It facilitates
interactions between the hardware and the software and has complete
control over everything.

On most systems, the kernel is loaded at boot time (when the
computer starts). Booting begins with firmware tests such as the
power-on self-test (POST), after which a bootloader loads the kernel.
After the kernel is loaded, it takes care of the rest of the booting.

A system call is an interface through which programs request services
from the kernel. The services could be something like creating new
processes, scheduling processes, interacting with hardware, etc.
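
For illustration, here is a minimal C program (assuming a POSIX
system) in which each call traps into the kernel:

/* Each of these library calls results in a system call (POSIX). */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* write() asks the kernel to copy bytes to file descriptor 1 (stdout). */
    write(1, "Hello from user space\n", 22);

    /* getpid() asks the kernel for this process's id. */
    printf("My process id is %d\n", (int)getpid());
    return 0;
}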

5) What is Multiprogramming?
Multiprogramming is a technique through which a CPU can execute
multiple jobs thereby increasing CPU utilization. This is generally done
by keeping multiple jobs in the memory and then using some
scheduling algorithm to decide which job is picked when.

6) What is Multitasking?
Multitasking is an extension of multiprogramming in which more than
one job is done at the same time. This is done by switching between
tasks based on the user interaction or based on assigned time. The
switching is very fast, so we do not see any lag when switching
between different applications.

7) What is Multiprocessing?
Multiprocessing is a technique in which there are two or more CPUs
(processors) within the same computer. This allows us to run different
processes on the same computer at the same time. There are certain
components shared between the different processors like memory,
clock, etc.

8) What is a Process?
A process is an instance of a computer program that is getting
executed. A computer program is passive and contains a set of
instructions (code) stored in the file system. A process is active and
works by loading the program into memory and executing it.

A process consists of the following resources:

 Text (machine code of the compiled program)
 Data (global, static, constant, and uninitialized variables)
 Heap (dynamic memory allocation)
 Stack (temporary data such as local variables, function calls,
parameters, return address, etc.)

In memory, a process is laid out with these sections, and both the
Heap and the Stack can grow depending on dynamic memory allocation
and the variables/functions in use.

9) Explain the Process States and Lifecycle


A process goes through different states in its entire lifecycle. A
process goes through the following states:

 Start: This is the initial state. A process is in this state on
its creation.
 Ready: The process is assigned the 'Ready' state when it is ready
to be processed. The process is moved from the 'Start' state to
the 'Ready' state after it is moved into memory. The process can
be moved here from the 'Running' state as well if some other
process is moved to the 'Running' state.
 Running: The process is moved to the 'Running' state when it is
assigned to a processor and its processing starts. At any point
in time, at most one process can be in the 'Running' state on a
single processor.
 Waiting: The process is moved to the 'Waiting' state if the
process is waiting for any resource.
 Terminated: The process is moved to the 'Terminated' state once
its execution is completed or it is terminated by the OS.

10) What is a Process Control Block (PCB)?


A process is internally represented by the operating system using a
process control block (PCB). It is also known as Task Control Block.

A PCB contains the following information:

 Process Identification (process id)
 Process State
 Process Control Information (Scheduling Information, Program
Counter, Process Privileges, CPU Registers, etc.)
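
As a rough sketch, a PCB could be modeled in C as below. The field
names are illustrative only, not taken from any real kernel (Linux's
equivalent, task_struct, is far larger):

/* A simplified, illustrative PCB; all names here are hypothetical. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identification        */
    enum proc_state state;           /* process state                 */
    unsigned long   program_counter; /* next instruction to execute   */
    unsigned long   registers[16];   /* saved CPU register contents   */
    int             priority;        /* scheduling information        */
    void           *page_table;      /* memory-management information */
    int             open_files[16];  /* I/O status information        */
};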

11) What is exec() and fork()?


Processes are created through different system calls. The two most
popular ones are exec() and fork().

 The exec() family of functions replaces the current process
image with a new process image. The exec() functions return
only if an error has occurred; the return value is -1, and errno
is set to indicate the error.
 fork() creates a new process by duplicating the calling process.
The new process is referred to as the child process. The calling
process is referred to as the parent process. The process id of
the child process is returned on calling fork().
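
A minimal C sketch combining the two calls (assuming a POSIX system;
running "ls" is just an example):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* duplicate the calling process */

    if (pid < 0) {                    /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* child: fork() returned 0 */
        execlp("ls", "ls", "-l", (char *)NULL); /* replace the image */
        perror("execlp");             /* reached only if exec fails */
        exit(1);
    } else {                          /* parent: pid is the child's id */
        wait(NULL);                   /* wait for the child to finish */
        printf("Child %d finished\n", (int)pid);
    }
    return 0;
}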

12) What is an Orphan Process?


An Orphan Process is a still-running process whose parent process
has finished or terminated. In Unix-like operating systems, an orphan
process is adopted by the init process. In some operating systems, an
orphan process is immediately terminated.

13) What is a Zombie Process?


A Zombie Process is a process that has terminated but still has an
entry in the process table. This happens because the parent process
must read the child process's exit status (e.g., via wait()) before
the entry can be removed from the table.
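
A small C sketch (POSIX assumed) that makes the zombie visible and
then reaps it:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0)
        exit(42);             /* child terminates immediately */

    sleep(10);                /* child is a zombie now; "ps" shows it
                                 as <defunct> during these 10 seconds */

    int status;
    waitpid(pid, &status, 0); /* parent reads the exit status, so the
                                 kernel can remove the table entry    */
    if (WIFEXITED(status))
        printf("Child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}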

14) What are Threads?


A thread of execution is the smallest sequence of instructions that can
be independently managed by a scheduler. Threads are small
components of a process and multiple threads can run concurrently
and share the same code, memory, variables, etc.

Each thread shares the same code, data, and heap blocks but has its
own stack. Threads are often called lightweight processes because
they are cheaper to create and switch between than full processes,
since they share most of the process's resources.

Multiple threads are run together through a CPU capability known
as multithreading.

15) What is the difference between Process and Threads?
 The primary difference between a process and a thread is that
different processes run in separate memory spaces (code,
variables, etc.), whereas different threads in the same process
share the same memory space.
 Threads are lightweight whereas processes are heavyweight.
 Switching between threads is cheaper than switching between
processes, since the address space stays the same and less state
needs to be saved and restored.
 When a thread makes changes to any variable, all other threads
in the same process can see it. The same is not true for different
processes.

16) What is Multithreading?


Multithreading is the ability of a CPU to allow multiple threads of
execution to run concurrently. Multithreading is different from
multiprocessing as multithreading aims to increase the utilization of a
single core. In most modern system architectures, multithreading and
multiprocessing are combined.
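
A minimal POSIX-threads sketch of two threads running concurrently
within one process (compile with: gcc demo.c -pthread):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    /* Both threads share the process's globals, code, and heap. */
    printf("Hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}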

17) What is meant by Concurrency and Parallelism?


Concurrent processing means that different tasks are progressing at
the same time. This does not mean that they are running in parallel.
Concurrency works by scheduling different threads to make the
maximum utilization of a single CPU.

Parallelism means that different threads are being processed at the
same time. This can be done by distributing the different threads
across different CPUs.

We can achieve maximum performance by combining concurrency
and parallelism in our applications.

18) What are the advantages and disadvantages of Multithreading?
The most significant advantages of multithreading are:

 Better CPU Utilization
 Higher Throughput
 More Responsive Programs
 Efficient Utilization of Resources (Multiple Threads vs Multiple
Processes)
Multithreading has a lot of advantages but it can also create issues if
not used well.

The most significant disadvantages of multithreading compared to a
single-threaded program are:

 Complex Design
 Race Conditions
 Deadlocks
 Starvation
 Livelock
 Difficult to test and debug
 Thread context-switching overhead

19) What is meant by Process Scheduling?


As covered earlier, multiprogramming is a technique through which a
CPU can execute multiple jobs, thereby increasing CPU utilization.
This is generally done by keeping multiple jobs in memory and then
using some scheduling algorithm to decide which job is picked when.

Process Scheduling is what allows an OS to do multiprogramming.


The OS maintains different queues for the different process states
(Start, Ready, Running, Waiting, etc). The OS Scheduler moves the
processes from one queue to another based on different scheduling
algorithms.

20) What are the different types of Process Schedulers?
There are three different types of process schedulers that work
together to increase multiprogramming with maximum CPU utilization
while managing processes in memory.

The three types of process schedulers are:

 Long-term Scheduler (Job Scheduler)
 Short-term Scheduler (CPU Scheduler)
 Medium-term Scheduler (Process Swapping Scheduler)

21) What is meant by Long-term Scheduler (Job Scheduler)?
The Long-term Scheduler controls the degree of multiprogramming by
bringing processes into the 'Ready Queue'. It moves a process from
the 'New' state to the 'Ready' state.

The Long-term Scheduler decides which processes to add to the
queue by making careful selections between CPU-bound processes
and I/O-bound processes to maintain a balance.

Modern Operating Systems don't have Long-term Schedulers.

22) What is meant by Short-term Scheduler (CPU Scheduler)?
The Short-term Scheduler moves a process from the 'Ready' state to
the 'Running' state. It moves the process from the 'Ready Queue' and
gives it to the CPU for processing using a dispatcher. It tries to
maximize CPU utilization and increase system performance.

Short-term Schedulers take scheduling decisions much more frequently
and much faster than Long-term and Medium-term schedulers.

23) What is meant by Medium-term Scheduler (Process Swapping
Scheduler)?
The Medium-term Scheduler does the job of swapping processes in and
out between main memory and secondary memory (like disk space). It
does so to make memory available for other processes. It swaps out
processes that have been inactive for some time, that take up a lot
of memory, or that have low priority. It swaps a process back in when
memory is available or when the process has been unblocked.
Medium-term Schedulers take scheduling decisions more frequently
and faster compared to Long-term schedulers.

Medium-term Schedulers do the job of a Long-term Scheduler as well
in modern Operating Systems.

24) What is Context Switching?


A context switch happens when a process is moved out from the CPU
and another process is given to the CPU. The outgoing process is
paused and can be resumed at a later time; its state changes from
'Running' to 'Ready' (or to 'Waiting', if it is blocked).

The major reasons for a context switch are:

 The process is doing an I/O operation (requires the I/O
processor instead of the CPU)
 The process is waiting for a synchronization operation to
complete
 The process has been running for a long time and may starve
other processes

Context Switches are computationally expensive, and so Operating
Systems try to optimize the use of context switches.

Context Switching is an essential part of achieving multitasking.

25) What is Preemptive and Non-Preemptive Scheduling?
Preemption is the act of temporarily interrupting an executing task with
an intention to resume it later. This is done by schedulers through
context switching.

There are different scheduling algorithms, each being either
preemptive or non-preemptive.

 Preemptive scheduling algorithms preempt a low-priority process
with a high-priority process based on some criteria.
 Non-preemptive scheduling algorithms don't preempt a process
after the process enters the 'Running' state.

26) What are Process Scheduling Algorithms?


The primary objective of a process scheduling algorithm is to
maximize CPU Utilization by keeping it as busy as possible.

While trying to maximize CPU utilization, the algorithms are also
supposed to have the following objectives:

 Fair allocation of CPU
 Enforce priorities, if present
 Prefer processes holding key resources
 Maximum Throughput
 Minimum Turnaround Time
 Minimum Waiting Time
 Minimum Response Time

27) What are some of the important Process Scheduling Algorithms?
There are a lot of different process scheduling algorithms. The
following are the most important process scheduling algorithms:

 First-Come-First-Serve (FCFS) Scheduling: Scheduling is done
according to the arrival time of the process. Easy to implement
and understand but has a high average waiting time (see the
sketch after this list).
 Shortest Job First (SJF) Scheduling: Processes with the
shortest (CPU burst) time are prioritized. The average waiting
time is low, but it is difficult to know the CPU burst time of a
process in advance.
 Shortest Remaining Time First (SRTF) Scheduling: Pre-
emptive version of SJF. Preempted when there is a job with
lesser remaining time. Same pros and cons as SJF.
 Round-Robin (RR) Scheduling: Preemptive algorithm where
each process is provided with a fixed time slice after which it is
preempted. It maintains a cyclical queue. Easy to implement and
avoids starvation but requires a lot of context switches.
 Priority-based Scheduling: Each process is assigned a priority
and the process with the highest priority is picked up (commonly
non-preemptive, though preemptive variants exist). In the case of
multiple processes with the same priority, FCFS is done on them.
Gives priority to important processes but might lead to starvation
for low-priority processes.
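
As referenced in the FCFS item above, here is a small C sketch that
computes the average waiting time under FCFS, using made-up burst
times and assuming all processes arrive at time 0:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};   /* sample CPU bursts of P1, P2, P3 */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;     /* P_i waits for all earlier bursts */
        wait += burst[i];
    }
    /* Order P1, P2, P3: (0 + 24 + 27) / 3 = 17 time units. */
    printf("Average waiting time: %.2f\n", (double)total_wait / n);
    return 0;
}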

28) What are the different types of processes?


Processes can be categorized into two types:

 Independent Processes: The execution of one process does not
affect the execution of another process.
 Cooperative Processes: The execution of one process affects
the execution of another process.

29) What is meant by Process Synchronization?


Process Synchronization is the task of synchronizing cooperative
processes such that the processes do not have access to shared data
or resources at the same time. This helps avoid inconsistency in data.

Thread Synchronization is a similar concept where multiple threads
are trying to access a single variable/data and synchronization needs
to be done so that only one thread can access it at a time.

Note: Both of these terms can be used interchangeably. In general,
'process synchronization' is used to denote both terms. 'Process' is
also used to denote both processes and threads while talking about
synchronization.

30) What is Race Condition?


Let's talk about a banking system. If a person does a transaction at an
ATM, how will the system work?
 The balance would be fetched from DB and stored in a variable
 The amount would be deducted from the balance
 The balance would be updated with the new value in DB

Let's say that the person simultaneously does a transaction using net
banking. The same thing would happen here. Since both of them are
happening simultaneously, the balance variable may be outdated in
one of the transactions. This will allow the user to withdraw more
money than the initial account balance.

Here, the order of execution changes the output. This is known as a
race condition.

A race condition can also happen between multiple processes or
threads where:

 multiple processes are trying to access and update the same
resource like memory, DB, cache, queue, etc.
 multiple threads are trying to access and update the same
variable.

Race conditions lead to bugs in concurrent, multi-threaded
applications.
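
A C sketch of the banking example above using POSIX threads; the
three-step read/deduct/write sequence is deliberately left
unsynchronized so an unlucky interleaving can corrupt the balance
(compile with: gcc race.c -pthread):

#include <pthread.h>
#include <stdio.h>

long balance = 1000;             /* shared "account balance" */

void *withdraw(void *arg) {
    long amount = (long)arg;
    long copy = balance;         /* 1) fetch the balance           */
    copy -= amount;              /* 2) deduct the amount           */
    balance = copy;              /* 3) write back (may be stale!)  */
    return NULL;
}

int main(void) {
    pthread_t atm, netbanking;
    pthread_create(&atm, NULL, withdraw, (void *)700L);
    pthread_create(&netbanking, NULL, withdraw, (void *)700L);
    pthread_join(atm, NULL);
    pthread_join(netbanking, NULL);
    /* If both threads read 1000 before either writes back, the final
       balance is 300 instead of -400: money was over-withdrawn.     */
    printf("balance = %ld\n", balance);
    return 0;
}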

31) What is meant by a Critical Section?


A Critical Section is a code segment where a shared variable is
accessed and/or updated. The critical section should be treated as an
atomic operation to avoid race conditions. This is done by allowing
only one thread to enter the critical section at any given point in
time; the other threads get access to it only after the previous
thread has exited the critical section. There are different
synchronization tools to achieve that.

32) What are the conditions required to solve a Critical Section
Problem?
There are three conditions required to solve a critical section problem:
 Mutual Exclusion: Only one thread/process can be in the critical
section at a point in time.
 Progress: If there is no thread/process in the critical section and
other threads are waiting to enter the critical section then one of
them must be allowed to enter the critical section.
 Bounded Waiting: A process/thread must not wait indefinitely to
enter the critical section. There must be a bound on the number
of other processes/threads that may be allowed to enter the
critical section after a process/thread has requested to enter the
critical section and before the request is granted.
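
A classic two-thread solution that satisfies all three conditions is
Peterson's algorithm. The sketch below assumes sequentially
consistent memory; on modern hardware and compilers you would need
atomics or memory fences:

/* Peterson's algorithm for two threads (i is 0 or 1, j = 1 - i). */
volatile int flag[2] = {0, 0};    /* flag[i]: thread i wants to enter */
volatile int turn = 0;            /* which thread must yield          */

void enter_critical_section(int i) {
    int j = 1 - i;
    flag[i] = 1;                  /* announce intent                  */
    turn = j;                     /* politely give the other priority */
    while (flag[j] && turn == j)  /* busy-wait while the other thread */
        ;                         /* wants in and it is its turn      */
}

void exit_critical_section(int i) {
    flag[i] = 0;                  /* no longer interested */
}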

33) What is meant by Semaphore?


There are different techniques to solve the critical section problem.
Semaphore is one of the most significant techniques to manage
concurrent access to resources.

A semaphore is a signaling mechanism used to synchronize access to
a resource. It is implemented using a flag (an integer variable) that
is used to ensure that only the maximum allowed number of
threads/processes can enter the critical section at a point in time.

Let's look at how a semaphore works.

 A thread signals entering the critical section by decrementing
the flag by 1.
 A thread signals exiting the critical section by incrementing
the flag by 1.
 If the flag value is 0, a thread cannot enter the critical
section until another thread exits the critical section, thereby
incrementing the flag and making it non-zero.

In general, there are two types of semaphores:

 Binary Semaphore: The initial value of the semaphore flag is 1.
This ensures that only one thread can enter the critical section
at a time. The value of the flag is always either 1 or 0.
 Counting Semaphore: The initial value of the semaphore flag can
be any positive number. The value of the flag denotes the
maximum number of threads that can be in the critical section at
any point in time.

A semaphore is a signaling mechanism, as the threads signal whenever
they enter/exit the critical section.

The semaphore flag is generally accessed/modified through two
atomic functions, wait() and signal(), denoting entering and exiting
the semaphore respectively.

Implementation

wait() {
    while (flag == 0);  // Notice the ; at the end: busy-wait until
                        // the flag becomes non-zero.
    flag--;             // enter: take one slot
}                       // (the test and decrement must be atomic)

signal() {
    flag++;             // exit: release one slot
}
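
In practice you would use a library implementation rather than
writing this by hand. A sketch using POSIX semaphores on Linux, where
sem_wait()/sem_post() correspond to wait()/signal() (compile with:
gcc sem.c -pthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t sem;                        /* shared counting semaphore */

void *worker(void *arg) {
    sem_wait(&sem);               /* wait(): decrement, block if 0 */
    printf("Thread %ld in critical section\n", (long)arg);
    sem_post(&sem);               /* signal(): increment, wake a waiter */
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 2);         /* at most 2 threads inside at once */
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&sem);
    return 0;
}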

34) What is meant by Mutual Exclusion (Mutex)?


Mutex (mutual exclusion) is a locking mechanism used to synchronize
access to a resource. It is a lock that needs to be acquired by a
thread/process to get access to the resource. Only one thread/process
can hold a mutex lock at a point in time.
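
A minimal POSIX-threads sketch of a mutex protecting a shared
counter (compile with: gcc mutex.c -pthread):

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* acquire: only one thread at a time */
        counter++;                    /* critical section */
        pthread_mutex_unlock(&lock);  /* release: only the owner may unlock */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the lock */
    return 0;
}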

35) What is the difference between Mutex and Semaphore?
Mutex is often confused with a binary semaphore. Let's look at the
differences between mutex and semaphore.
 Semaphore is a signaling mechanism whereas mutex is a
locking mechanism.
 There is ownership associated with mutex whereas no
ownership is associated with semaphore.
 Only the thread that acquired the mutex lock can release the
lock. In the case of semaphore, any process/thread can
increment/decrement the semaphore flag.
 Binary semaphores can be faster than mutex locks in some
implementations, as any thread/process can modify the semaphore
flag, which is not possible with a mutex.

36) What is meant by a Deadlock?


Deadlock is a situation where two or more processes can't proceed
further as they have cyclical dependencies on resources.

Example

 Process P1 holds Resource R1 and requires Resource R2 as
well.
 Process P2 holds Resource R2 and requires Resource R1 as
well.
 Processes P1 and P2 are held in a deadlock as both are
dependent on resources held by each other. This way, neither of
them can proceed further.

Necessary conditions for a deadlock


 Mutual Exclusion: At least one resource must be held in a
non-shareable mode, preventing other processes from accessing it
while it is in use.
 Hold-and-wait: A process must be holding one resource while
waiting for another.
 No preemption: A resource can be released only voluntarily by
the process that is holding it.
 Circular wait: There must be a cycle of processes, each waiting
for a resource held by the next one in the cycle.
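
The P1/P2 example above can be reproduced with two pthread mutexes
locked in opposite orders; the sleep makes the circular wait
near-certain (compile with: gcc deadlock.c -pthread):

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;   /* Resource R1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;   /* Resource R2 */

void *p1(void *arg) {
    pthread_mutex_lock(&r1);      /* P1 holds R1 ...      */
    sleep(1);
    pthread_mutex_lock(&r2);      /* ... and waits for R2 */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *p2(void *arg) {
    pthread_mutex_lock(&r2);      /* P2 holds R2 ...      */
    sleep(1);
    pthread_mutex_lock(&r1);      /* ... and waits for R1 */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);       /* likely never returns: circular wait */
    pthread_join(t2, NULL);
    return 0;
}
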
37) How can a Deadlock be handled?
A deadlock can be handled using three methods:

 Ignoring the deadlock (assuming that a deadlock will never
occur).
 Detecting and Recovering from deadlock (allowing deadlock to
happen and then recovering from it).
 Preventing deadlock (not allowing deadlocks to happen by
preventing one of the four necessary deadlock conditions from
occurring).

38) What is Deadlock Detection and Recovery?


A deadlock can be detected by looking for a cycle in the
resource-allocation graph.

After detection, deadlock recovery can be done by using one of the
two methods:

 Process Termination: Terminating one or more processes stuck
in the deadlock. Fast but expensive, as recomputation needs to be
done on restart.
 Resource Preemption: Preempting resources from one or more
processes to release the deadlock.

39) What is meant by Deadlock, Starvation, and Livelock?
 Deadlock is a situation where two or more processes cannot
proceed further as they have cyclical dependencies on resources.
In a deadlock, the processes are stuck in the 'Waiting' state.
 Livelock is a situation where two or more processes continuously
do the same thing in response to changes in each other. In a
livelock, the processes are stuck in the 'Running' state but are
not making any progress.
 Starvation is a problem where a process is not able to progress.
This generally happens when processes are blocked from
accessing required resources. Livelock is a specific case of
Starvation.

40) What is meant by a Pipe in OS?


A pipe is a connection or communication medium between two or more
interrelated processes. It uses message passing for communication
between the processes. It is useful when there is a need for one-way
communication between processes (Inter-Process Communication - IPC).
The communication is achieved when one of the processes writes into
the pipe and the other reads from the pipe.
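
A minimal C sketch (POSIX assumed) in which the parent writes into a
pipe and the child reads from it:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                    /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {            /* child: the reader */
        close(fd[1]);             /* close the unused write end */
        read(fd[0], buf, sizeof buf);
        printf("child read: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                 /* parent: the writer */
    write(fd[1], "hello via pipe", 15);   /* 15 bytes include the '\0' */
    close(fd[1]);
    wait(NULL);
    return 0;
}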
