QBank
PART A
S.No Questions
1 What are the objectives of operating system?
1. Convenience
2. Efficiency
3. Ability to evolve
The Process Control Block (PCB) stores the following information:
Process ID
Process State
Process Priority
Accounting Information
Program Counter
7 List the different operating system services.
● Program Execution
● I/O Operations
● File System Manipulation
● Communication
8 What is booting?
Booting is the process of loading the operating system into the computer's main memory (RAM) and preparing the system for users to run applications when the computer is switched on. The boot process may take only a few seconds on a modern computer.
9 Differentiate between GUI and CLI.
The main difference between GUI and CLI is that the Graphical User Interface (GUI) allows the user to interact with the system using graphical elements such as windows, icons, and menus, while the Command Line Interface (CLI) allows the user to interact with the system using typed commands.
10 Write an example of standard API.
A common example is weather data. Google uses APIs to display weather snippets for user search queries, and similar rich weather snippets are found on many platforms, such as Google Search, Apple's Weather app, and smart home devices.
11 Define the importance of system call.
● If a file system requires the creation or deletion of files. Reading from and writing to files also require a system call.
● Creation and management of new processes.
● Network connections also require system calls. This includes sending and receiving packets.
● Access to a hardware device such as a printer or scanner requires a system call.
14 Define dispatcher.
A dispatcher is a special program which comes into play after the scheduler. When the scheduler completes its job of selecting a process, it is the dispatcher which takes that process to the desired state/queue. The dispatcher is the module that gives a process control over the CPU after it has been selected by the short-term scheduler.
15 Define scheduler.
A scheduler is a system component that selects which process should be admitted into the system or allocated the CPU next. Schedulers are of three types: the long-term (job) scheduler, the short-term (CPU) scheduler, and the medium-term scheduler.
PART - B
1 Explain the evolution of operating systems.
First Generation
Second Generation
Third Generation
Fourth Generation
First Generation
Serial Processing
The evolution of operating systems began with serial processing. It marks the start of the development of electronic computing systems as alternatives to mechanical computers. Mechanical computing devices were flawed: human calculation speed is limited, and humans are prone to making mistakes. Because there is no operating system in this generation, instructions are given to the computer system directly and must be carried out immediately.
In the 1940s and 1950s, programmers interacted directly with the hardware components, without an operating system. The challenges here are scheduling and setup time. Users signed up for blocks of machine time, which wasted computational time. Setup time was required for loading the compiler, saving the compiled program, loading the source program, linking, and buffering. If an intermediate error occurred, the process was restarted.
Third Generation
Multi-Programmed Batched System
The third generation in the evolution of operating systems brought multi-programmed batched systems. In the third generation, the operating system was designed to serve numerous users simultaneously. Interactive users can communicate with a computer via an online terminal, making the operating system multi-user and multiprogramming. It is used to execute several jobs, which are kept in main memory. The processor determines which program to run through job scheduling algorithms.
Fourth Generation
The operating system is employed in this age for computer networks where users
are aware of the existence of computers connected to one another.
The era of networked computing has begun, and users are served by a Graphical User Interface (GUI), a comfortable graphical computer interface. In the fourth generation, the time-sharing operating system and the Macintosh operating system came into existence.
Example: Mac OS X 10.6.8 Snow Leopard and OS X 10.7.5 Lion are some examples of Macintosh OS.
2 Explain the purpose and importance of system calls with example.
The interface between a process and an operating system is provided by system
calls. In general, system calls are available as assembly language instructions. They
are also included in the manuals used by assembly-level programmers. System calls are usually made when a process in user mode requires access to a resource; it then requests the kernel to provide the resource via a system call.
If a file system requires the creation or deletion of files. Reading and writing from
files also require a system call.
Creation and management of new processes.
Network connections also require system calls. This includes sending and receiving
packets.
Access to a hardware device such as a printer, scanner etc. requires a system call.
Types of System Calls
There are mainly five types of system calls. These are explained in detail as follows:
Process Control
These system calls deal with processes such as process creation, process
termination etc.
File Management
These system calls are responsible for file manipulation such as creating a file,
reading a file, writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from
device buffers, writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating system
and the user program.
Communication
These system calls are useful for interprocess communication. They also deal with
creating and deleting a communication connection.
Some of the examples of all the above types of system calls in Windows and Unix
are given as follows -
open()
The open() system call is used to provide access to a file in a file system. This system call allocates resources to the file and provides a handle that the process uses to refer to the file. A file can be opened by multiple processes at the same time or be restricted to one process. It all depends on the file organization and file system.
read()
The read() system call is used to access data from a file that is stored in the file
system. The file to read can be identified by its file descriptor and it should be
opened using open() before it can be read. In general, the read() system call takes three arguments: the file descriptor, the buffer which stores the read data, and the number of bytes to be read from the file.
write()
The write() system call writes the data from a user buffer into a device such as a
file. This system call is one of the ways to output data from a program. In general, the write() system call takes three arguments: the file descriptor, the pointer to the buffer where data is stored, and the number of bytes to write from the buffer.
close()
The close() system call is used to terminate access to a file system. Using this system call means that the file is no longer required by the program, so the buffers are flushed, the file metadata is updated, and the file resources are deallocated.
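To tie these calls together, here is a minimal sketch using the POSIX open()/read()/write()/close() interface described above (the file name is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[64];

    /* open() returns a file descriptor, the handle described above */
    int fd = open("example.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* read() takes the descriptor, a buffer, and a byte count */
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        write(STDOUT_FILENO, buf, n);   /* write the bytes to stdout */

    close(fd);                          /* flush and release resources */
    return 0;
}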
wait()
In some systems, a process may wait for another process to complete its execution.
This happens when a parent process creates a child process and the execution of the
parent process is suspended until the child process executes. The suspending of the
parent process occurs with a wait() system call. When the child process completes
execution, the control is returned back to the parent process.
exec()
This system call runs an executable file in the context of an already running process. It replaces the previous executable file; this is known as an overlay. The original process identifier remains, since a new process is not created, but the data, heap, stack, etc. of the process are replaced by the new program.
fork()
Processes use the fork() system call to create processes that are copies of themselves. This is one of the major methods of process creation in operating systems. When a parent process creates a child process, the execution of the parent process may be suspended (via wait()) until the child process completes; control is then returned to the parent process.
exit()
The exit() system call is used by a program to terminate its execution. In a
multithreaded environment, this means that the thread execution is complete. The
operating system reclaims resources that were used by the process after the exit()
system call.
kill()
The kill() system call is used by the operating system to send a signal to a process urging it to exit. However, the kill() system call does not necessarily mean killing the process; depending on the signal, it can have various meanings.
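As a hedged illustration of the process-control calls above, the following POSIX sketch combines fork(), wait(), and exit(); the status value 42 is arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* create a copy of this process */

    if (pid == 0) {                /* child branch */
        printf("child: pid %d\n", getpid());
        exit(42);                  /* terminate with a status code */
    } else if (pid > 0) {          /* parent branch */
        int status;
        wait(&status);             /* suspend until the child exits */
        printf("parent: child returned %d\n", WEXITSTATUS(status));
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}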
3 Consider the following set of processes, with the length of the CPU burst given in
milliseconds:
a) Draw Gantt charts that illustrate the execution of these processes using the FCFS, SJF, Priority, SRT, and RR algorithms. b) What is the turnaround time of each process for each of the scheduling algorithms? c) What is the waiting time of each process for each of these scheduling algorithms? d) Which of the algorithms results in the minimum average waiting time?
Shortest Job First (SJF): The processor should know in advance how much time each process will take.

Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
Priority Scheduling
Processes with the same priority are executed on a first come first served basis.
Given: a table of processes with their arrival time, execution time, and priority. Here we are considering 1 as the lowest priority.
Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion but it can be preempted
by a newer ready job with shorter time to completion.
It is often used in batch environments where short jobs need to be given preference.
Round Robin (RR)
Once a process has executed for a given time period (the time quantum), it is preempted and another process executes for a given time period.
When process creation is taking place, the process is in the new state, and when the process gets terminated, it is in the terminated (completed) state.
The states of the process are stored in Process Control Block(PCB). PCB is a
special data structure that stores information about the process.
Let’s learn about the various states a process can go through in detail in the next
section along with the process state diagram.
Ready State
When the process creation gets completed, the process comes into a ready state.
During this state, the process is loaded into the main memory and will be placed in
the queue of processes which are waiting for the CPU allocation.
While a process is being created it is in the new state, and once creation completes, the process moves to the ready state.
Running State
Whenever the CPU is allocated to the process from the ready queue, the process
state changes to Running.
Terminated or Completed
When the entire set of instructions has been executed, the process is completed and its state changes to terminated (completed). During this state, the PCB of the process is also deleted.
It is possible that there are multiple processes present in the main memory at the
same time.
Suspend Ready
So whenever the main memory is full, a process which is in the ready state is swapped out from main memory to secondary memory. When a ready-state process goes through this transition from main memory to secondary memory, its state is changed to the Suspend Ready state. Once main memory has enough space for the process, it will be brought back to main memory and will again be in the ready state.
A process in the waiting or blocked state can also be swapped out to secondary memory. Let's understand which state a process in the waiting or blocked state will go to.
Categories of Scheduling
There are two categories of scheduling:
Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During resource allocation, the process switches from the running state to the ready state or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes and replace the currently running process with a higher-priority one.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling
Queues. The OS maintains a separate queue for each of the process states and PCBs
of all processes in the same execution state are placed in the same queue. When the
state of a process is changed, its PCB is unlinked from its current queue and moved
to its new state queue.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher is used as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
Schedulers
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler

Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs
are admitted to the system for processing. It selects processes from the queue and
loads them into memory for execution. Process loads into the memory for CPU
scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the average
rate of process creation must be equal to the average departure rate of processes
leaving the system.
On some systems, the long-term scheduler may not be available or minimal. Time-
sharing operating systems have no long term scheduler. When a process changes the
state from new to ready, then there is use of long-term scheduler.
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.
Program Counter
Scheduling information
Base and limit register values
Currently used registers
Changed state
I/O state information
Accounting information
UNIT-2
PART-A
S.No Questions
1 Define threads.
A thread is a path of execution within a process. A process can contain multiple
threads. A thread is also known as lightweight process.
Types of Threads
There are two types of threads.
* User Level Thread
* Kernel Level Thread
2 List the various states of threads
(1) Ready
(2) Running
(3) Waiting
(4) Delayed
(5) Blocked (excluding the CREATION and FINISHED states)
3 Differentiate threads and processes
A process is called a heavyweight process, while a thread is lightweight, as each thread in a process shares code, data, and resources. Process switching uses an interface in the operating system, whereas thread switching does not require a call to the operating system or an interrupt to the kernel.
4 Define process synchronization
Process synchronization is the coordination of concurrent processes that share data or resources, so that they execute in a controlled order and data inconsistency is avoided.
5 Define critical section
Critical Section is the part of a program which tries to access shared resources. That resource may be any resource in a computer like a memory location, a data structure, the CPU, or any I/O device.
6 List the section of the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections
stack
heap
text
data
7 Indicate the definition of mutex locks.
A mutex lock has a boolean variable available whose value indicates whether the lock is available or not. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable. A process that attempts to acquire an unavailable lock is blocked until the lock is released.
8 Define deadlock
A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.
The four necessary conditions for deadlock are:
Mutual exclusion
Hold and wait
No pre-emption
Circular wait
The hold and wait condition states that a process is holding one or more resources that may (or may not) be required by other processes. The key point is that the process holds onto those resources and will not release them until it gets access to the requested resources (which are being held by other processes).
14 What are the two methods in semaphore to overcome the wastage of CPU
cycles.
● Wait(S) or P: If the semaphore value is greater than 0, decrement the value. Otherwise, wait until the value is greater than 0 and then decrement it.
● Signal(S) or V: Increment the value of the semaphore.
15 Write the importance of Peterson's solution.
Peterson's solution provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and bounded waiting. The structure of process Pi in Peterson's solution is shown in Part B.
PART - B
1 Explain about different Multithreading models with a neat diagram
Multithreading Model:
Multithreading allows the application to divide its task into individual threads. In multithreading, the same process or task can be done by a number of threads; that is, there is more than one thread to perform the task. With the use of multithreading, multitasking can be achieved.
For example, in a browser, multiple tabs can be different threads, and a word processor can use one thread to format text and another to process input.
In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads are handled above the kernel and are managed without any kernel support. On the other hand, the operating system directly manages kernel-level threads. Nevertheless, there must be a form of relationship between user-level and kernel-level threads.
2 Explain the critical section problem and the methods used to solve it.
Sections of a Program
Here are the four essential elements of the critical section:
Entry Section: It is the part of the process which decides the entry of a particular process.
Critical Section: This part allows one process to enter and modify the shared variable.
Exit Section: The exit section allows the other processes that are waiting in the entry section to enter the critical section. It also ensures that a process that has finished its execution is removed through this section.
Remainder Section: All other parts of the code, which are not in the critical, entry, and exit sections, are known as the remainder section.
What is Critical Section Problem?
A critical section is a segment of code which can be accessed by a single process at a specific point in time. The section contains shared data and resources that also need to be accessed by other processes.
The entry to the critical section is handled by the wait() function, and it is
represented as P().
The exit from a critical section is controlled by the signal() function, represented as
V().
Other processes waiting to execute their critical sections need to wait until the current process completes its execution.
Here are some widely used methods to solve the critical section problem.
Peterson Solution
Peterson's solution is a widely used solution to the critical section problem. This algorithm was developed by the computer scientist Peterson, which is why it is named Peterson's solution.
In this solution, when a process is executing in its critical section, the other process executes only the rest of the code, and vice versa. This method also helps to make sure that only a single process runs in the critical section at a specific time.
Example
PROCESS Pi:
FLAG[i] = true
while ( (turn != i) AND (CS is not free) ) { wait; }
// CRITICAL SECTION
FLAG[i] = false
turn = j; // choose another process to go to CS
Assume there are N processes (P1, P2, … PN) and every process at some point of
time requires to enter the Critical Section
A FLAG[] array of size N is maintained which is by default false. So, whenever a
process requires to enter the critical section, it has to set its flag as true. For
example, If Pi wants to enter it will set FLAG[i]=TRUE.
Another variable called TURN indicates the process number which is currently waiting to enter the CS.
The process which enters into the critical section while exiting would change the
TURN to another number from the list of ready processes.
Example: turn is 2 then P2 enters the Critical section and while exiting turn=3 and
therefore P3 breaks out of wait loop.
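As a concrete illustration, here is a runnable sketch of the classic two-process formulation of Peterson's algorithm in C11. It uses atomics because, on modern hardware, plain variables can be reordered and break mutual exclusion; the worker and counter names are illustrative:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];   /* FLAG[] from the description above */
static atomic_int turn;       /* TURN variable                     */
static long counter = 0;      /* shared data guarded by the lock   */

static void lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);   /* I want to enter */
    atomic_store(&turn, j);         /* but let the other go first */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                           /* busy-wait */
}

static void unlock(int i) { atomic_store(&flag[i], false); }

static void *worker(void *arg) {
    int i = (int)(long)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;                  /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}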
Synchronization Hardware
Some times the problems of the Critical Section are also resolved by hardware.
Some operating system offers a lock functionality where a Process acquires a lock
when entering the Critical section and releases the lock after leaving it.
So when another process is trying to enter the critical section, it will not be able to
enter as it is locked. It can only do so if it is free by acquiring the lock itself.
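A minimal sketch of such a lock in C11, assuming the hardware provides an atomic test-and-set operation (exposed here through the standard atomic_flag type):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

/* Spin until the previous value of the flag was clear;
   test-and-set reads and sets the flag in one atomic step. */
void acquire(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* busy-wait: someone else holds the lock */
}

void release(void) {
    atomic_flag_clear(&lock_flag);
}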
Mutex Locks
Synchronization hardware is not a simple method to implement for everyone, so a strict software method known as mutex locks was also introduced.
In this approach, in the entry section of code, a LOCK is obtained over the critical
resources used inside the critical section. In the exit section that lock is released.
Semaphore Solution
Semaphore is simply a variable that is non-negative and shared between threads. It
is another algorithm or solution to the critical section problem. It is a signaling
mechanism and a thread that is waiting on a semaphore, which can be signaled by
another thread.
It uses two atomic operations, 1) wait and 2) signal, for process synchronization.
Example
WAIT ( S ):
    while ( S <= 0 );
    S = S - 1;
SIGNAL ( S ):
    S = S + 1;
3 Discuss the importance of Readers-Writers Problem with its algorithm.
The Problem Statement
There is a shared resource which should be accessed by multiple processes. There
are two types of processes in this context. They are reader and writer. Any number
of readers can read from the shared resource simultaneously, but only one writer
can write to the shared resource. When a writer is writing data to the resource, no
other process can access the resource. A writer cannot write to the resource if there is a non-zero number of readers accessing the resource at that time.
The Solution
From the above problem statement, it is evident that readers have higher priority than writers. If a writer wants to write to the resource, it must wait until there are no readers currently accessing the resource.
Instead of having a reader acquire a lock on the shared resource itself, we use the mutex m to make a process acquire and release a lock whenever it is updating the read_count variable.
The code for the writer process looks like this:
while(TRUE)
{
    wait(w);
    /* perform the write operation */
    signal(w);
}
And, the code for the reader process looks like this:
while(TRUE)
{
    // acquire lock
    wait(m);
    read_count++;
    if(read_count == 1)
        wait(w);
    // release lock
    signal(m);

    /* perform the reading operation */

    // acquire lock
    wait(m);
    read_count--;
    if(read_count == 0)
        signal(w);
    // release lock
    signal(m);
}
Here is the code explained:
As seen above in the code for the writer, the writer just waits on the w semaphore
until it gets a chance to write to the resource.
After performing the write operation, it signals w (incrementing it) so that the next writer can access the resource.
On the other hand, in the code for the reader, the lock is acquired whenever the
read_count is updated by a process.
When a reader wants to access the resource, first it increments the read_count value,
then accesses the resource and then decrements the read_count value.
The semaphore w is used by the first reader which enters the critical section and the
last reader which exits the critical section.
The reason for this is that when the first reader enters the critical section, the writer is blocked from the resource. Only new readers can access the resource now.
Similarly, when the last reader exits the critical section, it signals the writer using
the w semaphore because there are zero readers now and a writer can have the
chance to access the resource.
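For reference, here is a runnable sketch of the same scheme using POSIX semaphores and pthreads; the thread counts and printed data are illustrative, not part of the original algorithm:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t m;            /* protects read_count          */
static sem_t w;            /* exclusive access for writers */
static int read_count = 0;
static int shared_data = 0;

static void *reader(void *arg) {
    long id = (long)arg;
    sem_wait(&m);
    read_count++;
    if (read_count == 1) sem_wait(&w);  /* first reader blocks writers */
    sem_post(&m);

    printf("reader %ld sees %d\n", id, shared_data);

    sem_wait(&m);
    read_count--;
    if (read_count == 0) sem_post(&w);  /* last reader releases writers */
    sem_post(&m);
    return NULL;
}

static void *writer(void *arg) {
    long id = (long)arg;
    sem_wait(&w);
    shared_data++;                      /* exclusive write */
    printf("writer %ld wrote %d\n", id, shared_data);
    sem_post(&w);
    return NULL;
}

int main(void) {
    pthread_t r[3], wr;
    sem_init(&m, 0, 1);
    sem_init(&w, 0, 1);
    pthread_create(&wr, NULL, writer, (void *)1L);
    for (long i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, (void *)i);
    for (long i = 0; i < 3; i++) pthread_join(r[i], NULL);
    pthread_join(wr, NULL);
    return 0;
}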
4 Discuss the importance of Dining philosophers Problem with its algorithm.
The dining philosophers problem is another classic synchronization problem which
is used to evaluate situations where there is a need of allocating multiple resources
to multiple processes.
while(TRUE)
{
wait(stick[i]);
/*
mod is used because if i = 4, the next
chopstick is 0 (the dining table is circular)
*/
wait(stick[(i+1) % 5]);
/* eat */
signal(stick[i]);
signal(stick[(i+1) % 5]);
/* think */
}
When a philosopher wants to eat the rice, he will wait for the chopstick at his left
and picks up that chopstick. Then he waits for the right chopstick to be available,
and then picks it too. After eating, he puts both the chopsticks down.
But if all five philosophers are hungry simultaneously, and each of them picks up one chopstick, then a deadlock situation occurs because they will each be waiting for another chopstick forever. The possible solutions for this are:
A philosopher must be allowed to pick up the chopsticks only if both the left and
right chopsticks are available.
Allow only four philosophers to sit at the table. That way, if all the four
philosophers pick up four chopsticks, there will be one chopstick left on the table.
So, one philosopher can start eating and eventually, two chopsticks will be
available. In this way, deadlocks can be avoided.
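Here is a hedged sketch in C of the second solution, using a counting semaphore (called room here, an assumed name) initialized to four so that at most four philosophers compete for chopsticks at once:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5

static sem_t stick[N];   /* one binary semaphore per chopstick   */
static sem_t room;       /* counting semaphore, initialized to 4 */

static void *philosopher(void *arg) {
    int i = (int)(long)arg;

    sem_wait(&room);                 /* at most N-1 philosophers sit */
    sem_wait(&stick[i]);             /* pick up left chopstick  */
    sem_wait(&stick[(i + 1) % N]);   /* pick up right chopstick */

    printf("philosopher %d is eating\n", i);

    sem_post(&stick[(i + 1) % N]);   /* put down right chopstick */
    sem_post(&stick[i]);             /* put down left chopstick  */
    sem_post(&room);                 /* leave the table          */
    return NULL;
}

int main(void) {
    pthread_t t[N];
    sem_init(&room, 0, N - 1);
    for (int i = 0; i < N; i++) sem_init(&stick[i], 0, 1);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}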
5 Explain the bankers algorithm with an example.
The banker's algorithm is used to avoid deadlock and allocate resources safely to each process in the computer system. It examines all possible allocations (the "S-state" test) before deciding whether an allocation should be allowed for each process. It also helps the operating system to safely share the resources between all the processes. The banker's algorithm is so named because it simulates the way a bank decides whether a loan amount can be safely sanctioned. In this section, we will learn the banker's algorithm in detail and solve problems based on it. To understand the banker's algorithm, first we will see a real-world example of it.
Suppose the number of account holders in a particular bank is 'n', and the total money in the bank is 'T'. If an account holder applies for a loan, the bank first subtracts the loan amount from the total cash and then checks that the remaining cash is still enough to satisfy the needs of the other account holders before approving the loan. These steps are taken so that if another person applies for a loan or withdraws some amount from the bank, the bank can manage and operate everything without any restriction in the functionality of the banking system.
When working with the banker's algorithm, the operating system must know three things:
How much each process can request for each resource in the system. It is denoted
by the [MAX] request.
How much each process is currently holding each resource in a system. It is denoted
by the [ALLOCATED] resource.
It represents the number of each resource currently available in the system. It is
denoted by the [AVAILABLE] resource.
Following are the important data structures terms applied in the banker's algorithm
as follows:
Suppose n is the number of processes, and m is the number of resource types used in a computer system.
Available: It is an array of length 'm' that defines the number of each type of resource available in the system. Available[j] = K means that 'K' instances of resource type R[j] are available in the system.
Max: It is an n x m matrix that defines the maximum demand of each process. Max[i, j] = K means that process P[i] may request at most 'K' instances of resource type R[j] in the system.
Allocation: It is an n x m matrix that indicates the number of resources of each type currently allocated to each process in the system. Allocation[i, j] = K means that process P[i] is currently allocated K instances of resource type R[j].
Need: It is an n x m matrix representing the number of remaining resources for each process. Need[i][j] = K means that process P[i] may require K more instances of resource type R[j] to complete its assigned work.
Need[i][j] = Max[i][j] - Allocation[i][j].
Finish: It is a vector of length n. It includes a Boolean value (true/false) indicating whether each process has finished: i.e., whether it has been allocated its requested resources and has released all its resources after finishing its task.
The Banker's Algorithm is the combination of the safety algorithm and the resource
request algorithm to control the processes and avoid deadlock in a system:
Safety Algorithm
It is a safety algorithm used to check whether or not a system is in a safe state or
follows the safe sequence in a banker's algorithm:
1. There are two vectors, Work and Finish, of length m and n respectively. Initialize Work = Available and Finish[i] = false for all i.
2. Find an index i such that both conditions hold:
Finish[i] == false
Need[i] <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocation[i]
Finish[i] = true
Go to step 2 to check the status of resource availability for the next process.
4. If Finish[i] == true for all i, the system is in a safe state.
Resource Request Algorithm
Let us create a resource request array Request[i] for each process P[i]. Request_i[j] = K means that process P[i] requires 'K' instances of resource type R[j].
1. If the number of requested resources of each type is less than or equal to the Need resources, go to step 2; if the condition fails, it means that process P[i] has exceeded its maximum claim for the resource. As the expression suggests: Request_i <= Need_i.
2. If the number of requested resources of each type is less than or equal to the available resources, go to step 3; otherwise P[i] must wait. As the expression suggests: Request_i <= Available.
3. Pretend to allocate the requested resources to P[i]:
Available = Available - Request_i
Allocation_i = Allocation_i + Request_i
Need_i = Need_i - Request_i
If the resulting resource-allocation state is safe, the resources are allocated to process P[i]. If the new state is unsafe, process P[i] has to wait for Request_i and the old resource-allocation state is restored.
Example: Consider a system that contains five processes P1, P2, P3, P4, P5 and the
three resource types A, B and C. Following are the resources types: A has 10, B has
5 and the resource type C has 7 instances.
Process   Allocation   Max       Available
          A  B  C      A  B  C   A  B  C
P1        0  1  0      7  5  3   3  3  2
P2        2  0  0      3  2  2
P3        3  0  2      9  0  2
P4        2  1  1      2  2  2
P5        0  0  2      4  3  3
Answer the following questions using the banker's algorithm:
Process   Need
          A  B  C
P1        7  4  3
P2        1  2  2
P3        6  0  0
P4        0  1  1
P5        4  3  1
Hence, we created the context of need matrix.
Now we check whether each process's Need can be met, applying the safety algorithm with Work = Available = (3, 3, 2):
P2: Need (1, 2, 2) <= (3, 3, 2), so Work = (3, 3, 2) + Allocation (2, 0, 0) = (5, 3, 2)
P4: Need (0, 1, 1) <= (5, 3, 2), so Work = (5, 3, 2) + (2, 1, 1) = (7, 4, 3)
P5: Need (4, 3, 1) <= (7, 4, 3), so Work = (7, 4, 3) + (0, 0, 2) = (7, 4, 5)
P1: Need (7, 4, 3) <= (7, 4, 5), so Work = (7, 4, 5) + (0, 1, 0) = (7, 5, 5)
P3: Need (6, 0, 0) <= (7, 5, 5), so Work = (7, 5, 5) + (3, 0, 2) = (10, 5, 7)
Hence, executing the banker's algorithm finds the safe state, with the safe sequence P2, P4, P5, P1, P3.
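To make the safety check concrete, here is a runnable C sketch of the safety algorithm, preloaded with the Allocation, Need, and Available values from this example; it prints the same safe sequence:

#include <stdbool.h>
#include <stdio.h>

#define N 5  /* processes      */
#define M 3  /* resource types */

/* Data from the example above (P1..P5, resources A, B, C). */
int allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
int need[N][M]       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
int available[M]     = {3,3,2};

int main(void) {
    int work[M];
    bool finish[N] = {false};
    int order[N], count = 0;

    for (int j = 0; j < M; j++) work[j] = available[j];

    while (count < N) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                  /* P(i+1) can finish now */
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;
                order[count++] = i + 1;
                found = true;
            }
        }
        if (!found) { printf("System is NOT in a safe state\n"); return 1; }
    }
    printf("Safe sequence: ");
    for (int i = 0; i < N; i++) printf("P%d ", order[i]);
    printf("\n");                      /* prints P2 P4 P5 P1 P3 */
    return 0;
}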
UNIT-3
PART-A
S.No Questions
1 Define Swapping.
A process needs to be in memory to be executed. However, a process can be
swapped temporarily out of memory to a backing store and then brought back into
memory for continued execution. This process is called swapping
First fit allocates the first hole that is big enough. Searching can either start at the
beginning of the set of holes or where the previous first-fit search ended. Searching
can be stopped as soon as a free hole that is big enough is found.
4 How is memory protected in a paged environment?
Protection bits that are associated with each frame accomplish memory protection
in a paged environment. The protection bits can be checked to verify that no writes
are being made to a read-only page.
5 Write about External and Internal Fragmentation?
External fragmentation exists when enough total memory space exists to satisfy a
request, but it is not contiguous; storage is fragmented into a large number of small
holes.
When the allocated memory may be slightly larger than the requested memory, the
difference between these two numbers is internal fragmentation.
6 What are Pages and Frames?
Paging is a memory management scheme that permits the physical-address space
of a process to be non-contiguous. In the case of paging, physical memory is
broken into fixed-sized blocks called frames and logical memory is broken into
blocks of
the same size called pages.
7 What is the basic method of Segmentation?
Segmentation is a memory management scheme that supports the user view of
memory. A logical address space is a collection of segments. The logical address
consists of segment number and offset. If the offset is legal, it is added to the
segment base to produce the address in physical memory of the desired byte.
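A tiny sketch of this translation in C; the segment table values are assumed purely for illustration:

#include <stdio.h>

/* Each segment has a base (start in physical memory) and a limit (length). */
struct segment { int base; int limit; };

int main(void) {
    struct segment table[3] = { {1400, 1000}, {6300, 400}, {4300, 1100} };

    int seg = 2, offset = 53;   /* logical address = (segment, offset) */

    if (offset < table[seg].limit)          /* legal offset? */
        printf("physical address = %d\n",
               table[seg].base + offset);   /* 4300 + 53 = 4353 */
    else
        printf("trap: offset out of range\n");
    return 0;
}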
PART-B
S.No Questions
1 Define paging and describe the structure of the page
table with necessary diagrams
The data structure that is used by the virtual memory system in the operating system
of a computer in order to store the mapping between physical and logical addresses
is commonly known as Page Table.
As mentioned earlier, the logical address that is generated by the CPU is translated into the physical address with the help of the page table.
Thus page table mainly provides the corresponding frame number (base address of
the frame) where that page is stored in the main memory.
The above diagram shows the paging model of Physical and logical memory.
Generally; the Number of entries in the page table = the Number of Pages in which
the process is divided.
PTBR means page table base register and it is basically used to hold the base
address for the page table of the current process.
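A small C sketch of the lookup a page table performs; the page size and table contents are assumed values:

#include <stdio.h>

#define PAGE_SIZE 1024          /* assumed page size in bytes */

int main(void) {
    /* Hypothetical page table: page_table[p] = frame number */
    int page_table[4] = {5, 2, 7, 0};

    int logical = 2 * PAGE_SIZE + 123;   /* an address on page 2 */
    int page    = logical / PAGE_SIZE;   /* page number (high bits) */
    int offset  = logical % PAGE_SIZE;   /* offset (low bits)       */
    int physical = page_table[page] * PAGE_SIZE + offset;

    printf("logical %d -> page %d, offset %d -> physical %d\n",
           logical, page, offset, physical);
    return 0;
}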
Hierarchical Paging
Another name for Hierarchical Paging is multilevel paging.
There might be a case where the page table is too big to fit in a contiguous space, so
we may have a hierarchy with several levels.
In this type of paging, the logical address space is broken up into multiple page tables.
Hierarchical Paging is one of the simplest techniques and for this purpose, a two-
level page table and three-level page table can be used.
As we page the page table, the page number is further divided into an outer page number (p1) and an inner page offset (p2):
Address translation works from the outer page table inward, which is why this is also known as a forward-mapped page table.
The figure below shows the address translation scheme for a two-level page table.
Thus in order to avoid such a large table, there is a solution and that is to divide the
outer page table, and then it will result in a Three-level page table:
Hashed Page Table
This page table contains a chain of elements hashing to the same location. The virtual page numbers are compared in this chain, searching for a match; if a match is found, then the corresponding physical frame is extracted.
A variation of this scheme for 64-bit address spaces commonly uses clustered page tables.
These are mainly used for sparse address spaces, where memory references are non-contiguous and scattered.
Inverted Page Table
In an inverted page table, there is one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, along with information about the process that owns the page.
Though this technique decreases the memory that is needed to store each page table;
but it also increases the time that is needed to search the table whenever a page
reference occurs.
The figure below shows the address translation scheme of the inverted page table:
In this, we need to keep track of the process ID of each entry, because many processes may have the same logical addresses.
Also, many entries can map into the same index in the page table after going
through the hash function. Thus chaining is used in order to handle this.
2 Describe the concept of swapping in memory management with a neat diagram
Swapping is a memory management technique and is used to temporarily remove
the inactive programs from the main memory of the computer system. Any process
must be in the memory for its execution, but can be swapped temporarily out of
memory to a backing store and then again brought back into the memory to
complete its execution. Swapping is done so that other processes get memory for
their execution.
Due to the swapping technique, performance usually gets affected, but it also helps in running multiple big processes in parallel. The swapping process is also known as a technique for memory compaction. Basically, low-priority processes may be swapped out so that processes with a higher priority may be loaded and executed.
Let us understand this technique with the help of the figure given below:
The above diagram shows swapping of two processes where the disk is used as a
Backing store.
There are two more concepts that come with the swapping technique: swap-out (moving a process from main memory to the backing store) and swap-in (bringing it back into main memory).
Advantages of Swapping
The advantages/benefits of the Swapping technique are as follows:
The swapping technique mainly helps the CPU to manage multiple processes within
a single main memory.
With the help of this technique, the CPU can perform several tasks simultaneously.
Thus, processes need not wait too long before their execution.
Disadvantages of Swapping
The drawbacks of the swapping technique are as follows:
If the algorithm used for swapping is not good then the overall method can increase
the number of page faults and thus decline the overall performance of processing.
If the computer system loses power at the time of high swapping activity then the
user might lose all the information related to the program.
3 Explain Contiguous Memory Allocation with its memory prevention and memory
allocation phases.
In the Contiguous Memory Allocation, each process is contained in a single
contiguous section of memory. In this memory allocation, all the available memory
space remains together in one place which implies that the freely available memory
partitions are not spread over here and there across the whole memory space.
In this partition scheme, each partition may contain exactly one process. There is a
problem that this technique will limit the degree of multiprogramming because the
number of partitions will basically decide the number of processes.
Whenever any process terminates then the partition becomes available for another
process.
Example
Let's take an example of fixed size partitioning scheme, we will divide a memory
size of 15 KB into fixed-size partitions:
It is important to note that these partitions are allocated to the processes as they
arrive and the partition that is allocated to the arrived process basically depends on
the algorithm followed.
1. Internal Fragmentation
Suppose the size of the process is less than the size of the partition; in that case some portion of the partition is wasted and remains unused. This wastage inside the memory is generally termed internal fragmentation.
2. Limitation on the Degree of Multiprogramming
In this partition scheme, the size of a partition cannot change according to the size of the process. Thus the degree of multiprogramming is small and fixed.
In the variable (dynamic) partition scheme, by contrast, the partition size varies according to the need of the process, so there is no internal fragmentation.
4 Explain the various page replacement algorithms with examples.
First In First Out (FIFO) - In this algorithm, the page which entered memory earliest is replaced first; resident pages are maintained in a queue.
Example: Consider the pages referenced by the CPU in the order 6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1
As in the above figure shown, Let there are 3 frames in the memory.
6, 7, 8 are allocated to the vacant slots as they are not in memory.
When 9 comes page fault occurs, it replaces 6 which is the oldest in memory or
front element of the queue.
Then 6 comes (Page Fault), it replaces 7 which is the oldest page in memory now.
Similarly, 7 replaces 8, 1 replaces 9.
Then 6 comes which is already in memory (Page Hit).
Then 7 comes (Page Hit).
Then 8 replaces 6, 9 replaces 7. Then 1 comes (Page Hit).
Number of Page Faults = 9
While using the First In First Out algorithm, the number of page faults can increase when the number of frames is increased. This phenomenon is called Belady's Anomaly.
Let's take the same above order of pages with 4 frames.
In the above picture shown, it can be seen that the number of page faults is
10. There were 9 page faults with 3 frames and 10 page faults with 4 frames.
The number of page faults increased by increasing the number of frames.
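The following self-contained C sketch counts FIFO page faults for the reference string above and reproduces both results (9 faults with 3 frames, 10 with 4):

#include <stdio.h>

/* Count FIFO page faults for a reference string and frame count. */
int fifo_faults(const int *refs, int n, int frames) {
    int mem[16];                 /* resident pages (frames <= 16)   */
    int used = 0, next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < frames) mem[used++] = refs[i];           /* free slot */
        else { mem[next] = refs[i]; next = (next + 1) % frames; } /* evict oldest */
    }
    return faults;
}

int main(void) {
    int refs[] = {6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1};
    int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3)); /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4)); /* 10 */
    return 0;
}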
Optimal Page Replacement - In this algorithm, the page which would be used after
the longest interval is replaced. In other words, the page which is farthest to come in
the upcoming sequence is replaced.
Example: Consider the Pages referenced by the CPU in the order are 6, 7, 8, 9, 6, 7,
1, 6, 7, 8, 9, 1, 7, 9, 6
First, all the frames are empty. 6, 7, 8 are allocated to the frames (Page Fault).
Now, 9 comes and replaces 8 as it is the farthest in the upcoming sequence. 6 and 7
would come earlier than that so not replaced.
Then, 6 comes which is already present (Page Hit).
Then 7 comes (Page Hit).
Then 1 replaces 9 similarly (Page Fault).
Then 6 comes (Page Hit), 7 comes (Page Hit).
Then 8 replaces 6 (Page Fault) and 9 replaces 8 (Page Fault).
Then 1, 7, 9 come respectively which are already present in the memory.
Then 6 replaces 9 (Page Fault), it can also replace 7 and 1 as no other page is
present in the upcoming sequence.
The number of Page Faults = 8
Least Recently Used (LRU) - In this algorithm, the page which has not been used for the longest time is replaced.
Example: Consider the pages referenced by the CPU in the order 6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1, 7, 9, 6
First, all the frames are empty. 6, 7, 8 are allocated to the frames (Page Fault).
Now, 9 comes and replaces 6 which is used the earliest (Page Fault).
Then, 6 replaces 7, 7 replaces 8, 1 replaces 9 (Page Fault).
Then 6 comes which is already present (Page Hit).
2. Little OS overhead:
Partition management is simple, so the operating system overhead is small.
Disadvantages of fixed partitioning:
1. Internal Fragmentation:
Main memory usage is inefficient. Any program, even the smallest, occupies an entire partition. This can lead to internal fragmentation.
2. External Fragmentation:
The total unused space of different partitions cannot be used to load other processes, even though space is available, because it is not contiguous (spanning is not allowed).
Processes larger than the partition size cannot be accommodated in main memory.
The partition cannot be resized according to the size of the incoming process.
Dynamic Partitioning
In dynamic partitioning, the operating system retains the first partition, and the rest of the space is divided into partitions as processes arrive; the partition size is made equal to the process size.
In dynamic partitioning, we can avoid the internal fragmentation problem by sizing the partition according to the needs of the process.
In fixed partitioning, if the size of the process is greater than the size of a partition, we cannot keep or load the process into memory. In dynamic partitioning, however, the partition size is not fixed, and we can size the partition according to the process size.
3. No internal fragmentation:
Since each partition is sized to its process, no space inside a partition is wasted.
Disadvantages of dynamic partitioning:
1. Difficult Implementation:
Because allocation happens dynamically in memory space and the partition has to be resized every time, it is difficult for the operating system to handle everything.
2. External Fragmentation:-
Let’s say we have three processes P1 (2 MB), P2 (5 MB), and P3 (2 MB) and we
want to load the processes into different partitions of main memory.
Now processes P1 and P3 are completed and the space allocated to processes P1
and P3 is free. Now we have two 2 MB partitions which are unused in main memory. We cannot use this space to load a 4 MB process into memory because the space is not contiguous.
The rule says that a process can be loaded into memory only if it fits contiguously in main memory. So, if we want to avoid external fragmentation, we have to change this rule.
What is swapping?
It is a technique through which a process is swapped from main memory to secondary memory for some time, and that main memory space is freed for some other process; after some time, the system swaps that process back from secondary memory to main memory.
Although there is an impact on performance due to swapping, it allows multiple and large processes to run simultaneously; hence swapping is also known as a technique of memory compaction.
The total swap time of a process is the time taken to transfer the process from main memory to secondary memory, plus the time taken to transfer that process back from secondary memory into main memory and for it to regain its space there.
UNIT-4
PART-A
S.No Questions
1 List the various file attributes.
A file has certain other attributes, which vary from one operating system to another, but typically consist of these: name, identifier, type, location, size, protection, time, date, and user identification.
2 What are the various file operations?
The six basic file operations are
● Creating a file
● Writing a file
● Reading a file
● Repositioning within a file
● Deleting a file
● Truncating a file
Consider a system which performs 50% I/O and 50% computation. Doubling the CPU performance on this system would increase total system performance by only 50%. Doubling both system aspects would increase performance by 100%. Generally, it is important to remove the current system bottleneck to increase overall system performance, rather than blindly increasing the performance of individual system components.
Each UFD (user file directory) has a similar structure, but lists only the files of a single user. When a user job starts, the system's master file directory (MFD) is searched. The MFD is indexed by the user name or account number, and each entry points to the UFD for that user.
15 Determine the most common schemes for defining the logical structure of a directory.
The most common schemes for defining the logical structure of a directory are:
Single-Level Directory
Two-Level Directory
Tree-Structured Directories
Acyclic-Graph Directories
General Graph Directory
PART - B
1 Explain the different disk scheduling algorithms with examples.
A Process makes the I/O requests to the operating system to access the disk. Disk
Scheduling Algorithm manages those requests and decides the order of the disk
access given to the requests.
FCFS (First Come First Serve)
In this algorithm, the requests are served in the order in which they arrive in the disk queue.
Eg. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the Read-Write head is 60.
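As a quick illustration, this C sketch computes the total head movement for FCFS with the request order above (it serves requests strictly in arrival order):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int req[] = {70, 140, 50, 125, 30, 25, 160};
    int n = sizeof req / sizeof req[0];
    int head = 60, total = 0;

    /* FCFS: serve requests in arrival order, summing head movement. */
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    printf("Total head movement (FCFS) = %d\n", total);  /* 480 */
    return 0;
}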
SCAN
In this algorithm, the disk arm moves in a particular direction till the end and serves
all the requests in its path, then it returns to the opposite direction and moves till the
last request is found in that direction and serves all of them.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60. And it is given that the disk arm should move
towards the larger value.
LOOK
In this algorithm, the disk arm moves in a particular direction till the last request in that direction, serving all the requests found in its path, then reverses direction and serves the requests found in the path again, up to the last request. The only difference between SCAN and LOOK is that LOOK does not go to the end of the disk; it moves only up to the position of the last request in each direction.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60. And it is given that the disk arm should move
towards the larger value.
C-SCAN
This algorithm is the same as the SCAN algorithm. The only difference between
SCAN and C-SCAN is, it moves in a particular direction till the last and serves the
requests in its path. Then, it returns in the opposite direction till the end and doesn't
serve the request while returning. Then, again reverses the direction and serves the
requests found in the path. It moves circularly.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60. And it is given that the disk arm should move
towards the larger value.
C-LOOK
This algorithm is the same as the LOOK algorithm, except that, as in C-SCAN, after serving the last request in one direction the disk arm jumps back to the first request on the other end without serving any requests on the return, and then serves the remaining requests.
Eg. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the Read-Write head is 60. And it is given that the disk arm should move towards the larger value.
2 Explain the various file allocation methods in detail.
There are different kinds of methods that are used to allocate disk space. We must select the best method for file allocation because it will directly affect the system performance and system efficiency. With the help of the allocation method, we can utilize the disk, and files can also be accessed.
Contiguous allocation
Extents
Linked allocation
Clustering
FAT
Indexed allocation
Linked Indexed allocation
Multilevel Indexed allocation
Inode
There are different types of file allocation methods, but we mainly use three types
of file allocation methods:
Contiguous allocation
Linked list allocation
Indexed allocation
These methods provide quick access to the file blocks and also efficient utilization of disk space.
Contiguous Allocation: - Contiguous allocation is one of the most used methods for allocation. Contiguous allocation means we allocate the blocks in such a manner that in the hard disk, all the blocks get contiguous physical blocks.
allocation. Contiguous allocation means we allocate the block in such a manner, so
that in the hard disk, all the blocks get the contiguous physical block.
We can see in the below figure that in the directory, we have three files. In the table,
we have mentioned the starting block and the length of all the files. We can see in
the table that for each file, we allocate a contiguous block.
Example of contiguous allocation
We can see in the given diagram, that there is a file. The name of the file is ‘mail.’
The file starts from the 19th block and the length of the file is 6. So, the file
occupies 6 blocks in a contiguous manner. Thus, it will hold blocks 19, 20, 21, 22,
23, 24.
Linked List Allocation: - In linked list allocation, each file is a linked list of disk blocks. Each block contains a pointer to the next block of the same file.
We can see in the below figure that we have a file named ‘jeep.’ The value of the
start is 9. So, we have to start the allocation from the 9th block, and blocks are
allocated in a random manner. The value of the end is 25. It means the allocation is
finished on the 25th block. We can see in the below figure that block 25 contains -1, which means a null pointer, and it will not point to another block.
Advantages of Linked List Allocation
In linked list allocation, there is no external fragmentation. Due to this, we can utilize the memory better.
In linked list allocation, a directory entry only comprises of the starting block
address.
The linked allocation method is flexible because we can quickly increase the size of
the file because, in this to allocate a file, we do not require a chunk of memory in a
contiguous form.
Disadvantages of Linked list Allocation
There are various disadvantages of linked list allocation:
Linked list allocation does not support direct access or random access.
In linked list allocation, we need to traverse each block.
If a pointer in the linked list breaks, then the file gets corrupted.
Each disk block needs some extra space to store the pointer.
Indexed Allocation
The Indexed allocation method is another method that is used for file allocation. In
the index allocation method, we have an additional block, and that block is known
as the index block. For each file, there is an individual index block. In the index
block, the ith entry holds the disk address of the ith file block. We can see in the
below figure that the directory entry comprises of the address of the index block.
For a large file, a single index block may not be able to hold all the pointers. To resolve this problem, there are various mechanisms which we can use:
Linked scheme
Multilevel Index
Combined Scheme
Linked Scheme: - In the linked scheme, to hold the pointer, two or more than two
index blocks are linked together. Each block contains the address of the next index
block or a pointer.
Multilevel Index: - In the multilevel index, to point the second-level index block,
we use a first-level index block that in turn points to the blocks of the disk, occupied
by the file. We can extend this up to 3 or more than 3 levels depending on the
maximum size of the file.
Combined Scheme: - In a combined scheme, there is a special block called an information node (inode). The inode comprises all the information related to the file, such as authority, name, size, etc. The remaining space of the inode is used to store the disk block addresses that contain the actual file data. The first few pointers in the inode point to direct blocks; that is, they contain the addresses of disk blocks holding file data. The next few pointers point to indirect blocks. The indirect blocks are of three types: single indirect, double indirect, and triple indirect.
Inode
In the UNIX operating system, every file is indexed with the help of Inode. An
Inode is a block that is created at the time when the file system is designed.
3 Explain the various RAID levels in detail.
RAID 0 – striping
RAID 1 – mirroring
RAID 5 – striping with parity
RAID 6 – striping with double parity
RAID 10 – combining mirroring and striping
The software to perform the RAID-functionality and control the drives can either be
located on a separate controller card (a hardware RAID controller) or it can simply
be a driver. Some versions of Windows, such as Windows Server 2012 as well as
Mac OS X, include software RAID functionality. Hardware RAID controllers cost
more than pure software, but they also offer better performance, especially with
RAID 5 and 6.
If you want to use RAID 0 purely to combine the storage capacity of two drives in a single volume, consider mounting one drive in the folder path of the other drive. This is supported in Linux, OS X as well as Windows, and has the advantage that a single drive failure has no impact on the data of the second disk or SSD drive.
Advantages of RAID 10
If something goes wrong with one of the disks in a RAID 10 configuration, the
rebuild time is very fast since all that is needed is copying all the data from the
surviving mirror to a new drive. This can take as little as 30 minutes for drives of 1 TB.
Disadvantages of RAID 10
Half of the storage capacity goes to mirroring, so compared to large RAID 5 or
RAID 6 arrays, this is an expensive way to have redundancy.
What about RAID levels 2, 3, 4 and 7?
These levels do exist but are not that common (RAID 3 is essentially like RAID 5
but with the parity data always written to the same drive). This is just a simple
introduction to RAID-systems. You can find more in-depth information on the
pages of Wikipedia or ACNC.
Even with RAID, back-ups remain important for several reasons:
A back-up will come in handy if all drives fail simultaneously because of a power spike.
power spike.
It is a safeguard when the storage system gets stolen.
Back-ups can be kept off-site at a different location. This can come in handy if a
natural disaster or fire destroys your workplace.
The most important reason to back up multiple generations of data is user error. If someone accidentally deletes some important data and this goes unnoticed for several hours, days, or weeks, a good set of back-ups ensures you can still retrieve those files.
4 (i) Explain in detail about Application-I/O interface.
I/O Interface:
There is a need for an interface whenever a CPU wants to communicate with I/O devices. The interface is used to interpret the address which is generated by the CPU. Thus, an interface is used to communicate with I/O devices; i.e., to share information between the CPU and I/O devices, an interface is used, which is called the I/O interface.
When most computers turn on, the kernel is one of the first programs to load. It
takes care of the rest of the starting process and requests for memory,
peripherals, and software input/output, transforming them into CPU data-
processing instructions.
The processor is not directly connected to these devices. However, the data
exchanges between them are managed through an interface. This interface
converts system bus signals to and from a format appropriate to the provided
device. I/O registers are used to communicate between these external devices
and the processor.
The kernel provides many I/O services. The kernel provides several functions
that rely on the hardware and device driver infrastructure, such as caching,
scheduling, spooling, device reservation, and error handling.
1. Scheduling
The term "schedule" refers to determining an excellent sequence to perform a
series of I/O requests.
Scheduling can increase the system's overall performance, distribute device-access permissions fairly among all processes, and reduce average wait times, response times, and turnaround times for I/O to complete.
When an application makes a blocking I/O system call, the request is placed in the wait queue for that device, which is maintained by the operating system.
2. Buffering
The buffer is a section of main memory used to temporarily store or keep data
sent between two devices or between a device and an application.
Assists in dealing with device speed discrepancies.
Assists in dealing with device transfer size mismatches.
Data is transferred from user application memory into kernel memory.
Data from kernel memory is then sent to the device to maintain "copy
semantics."
It prevents an application from altering the contents of a buffer while it is being
written.
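The hypothetical C sketch below (kernel_write and struct kbuf are illustrative names, not a real kernel API) shows the copy step that makes this "copy semantics" guarantee possible:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of copy semantics: the "kernel" copies the user
 * buffer into kernel memory before queueing the I/O, so the caller can
 * safely reuse its buffer right away. Not a real kernel interface. */
struct kbuf {
    char  *data;
    size_t len;
};

struct kbuf *kernel_write(const char *user_buf, size_t len) {
    struct kbuf *kb = malloc(sizeof *kb);
    if (!kb) return NULL;
    kb->data = malloc(len);
    if (!kb->data) { free(kb); return NULL; }
    memcpy(kb->data, user_buf, len);   /* snapshot of the data taken here */
    kb->len = len;
    /* ...kb would now be handed to the device driver for the actual I/O. */
    return kb;
}
```

Because the kernel takes its own snapshot with memcpy, the application is free to modify user_buf as soon as the call returns.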
3. Caching
It involves storing a replica of data in a location that is easier to reach than the
original.
When you request a file from a Web page, for example, it is stored on your hard
disc in a cache subdirectory under your browser's directory. When you return to
a page you've recently visited, the browser can retrieve files from the cache
rather than the actual server, saving you time and reducing network traffic.
The distinction between a cache and a buffer is that a cache stores a copy of a
data item that exists elsewhere, whereas a buffer may hold the only existing copy
of a data item.
4. Spooling
A spool is a buffer that holds jobs for a device until it is ready to take them.
Spooling regards disks as a massive buffer that can hold as many tasks as the
device needs until the output devices are ready to take them.
If the device can only serve one request at a time, the spool retains output for it,
since such a device cannot accept interleaved data streams.
Spooling also allows a user to view specific queued data streams and, if desired,
delete them before the device processes them; cancelling a pending print job is a
typical example.
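A toy C sketch of this idea (the function names and fixed-size ring buffer are our own simplification, not a real spooler interface): jobs accumulate in a FIFO while the device is busy, and the printer drains them one at a time, never interleaved:

```c
#include <stdio.h>

/* Toy print spooler: jobs accumulate in a FIFO ring buffer while the
 * printer is busy; the printer drains them one at a time. */
#define MAX_JOBS 16

static const char *spool[MAX_JOBS];
static int head, tail;

void spool_submit(const char *job) {      /* called by any process */
    if ((tail + 1) % MAX_JOBS == head)
        return;                           /* spool full: drop (toy behaviour) */
    spool[tail] = job;
    tail = (tail + 1) % MAX_JOBS;
}

void printer_drain(void) {                /* called when the device is idle */
    while (head != tail) {
        printf("printing: %s\n", spool[head]);
        head = (head + 1) % MAX_JOBS;
    }
}

int main(void) {
    spool_submit("report.pdf");
    spool_submit("photo.png");
    printer_drain();    /* jobs come out one at a time, never interleaved */
    return 0;
}
```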
5. Error Handling
Protected memory operating systems can safeguard against a wide range of
hardware and application faults, ensuring that each tiny mechanical glitch does
not result in a complete system failure.
Devices and I/O transfers can fail for various reasons, including transitory
causes, such as when a network gets overcrowded, and permanent reasons, such
as when a disc controller fails.
6. I/O Protection
System calls are required for I/O. Illegal I/O instructions may be used by user
programs to try to interrupt regular operation, either accidentally or on purpose.
To prevent this, all I/O instructions are defined as privileged, and user programs
must perform I/O through system calls. Memory-mapped I/O regions and I/O
ports both need to be protected as well.
5 Explain about various levels of directory structure?
A Directory is the collection of the correlated files on the disk. In simple words, a
directory is like a container which contains file and folder. In a directory, we can
store the complete file attributes or some attributes of the file. A directory can
comprise various files. With the help of the directory, we can maintain the
information related to the files.
There should be at least one directory that must be present in each partition.
Through it, we can list all the files of the partition. In the directory for each file,
there is a directory entry, which is maintained, and in that directory entry, all the
information related to the file is stored.
A directory entry typically records the following attributes of a file:
Name
Type
Location
Size
Position
Protection
Usage
Mounting
Name: - Name is the name of the directory, which is visible to the user.
Type: - Type of a directory means what type of directory is present such as single-
level directory, two-level directory, tree-structured directory, and Acyclic graph
directory.
Location: - Location is the location of the device where the header of the file is
located.
Size: - Size means the number of words/blocks/bytes in the file.
Position: - Position means the position of the next-read pointer and the next-write
pointer.
Protection: - Protection means access control on the read/write/delete/execute.
Usage: - Usage means the time of creation, modification, and access, etc.
Mounting: - Mounting means if the root of a file system is grafted into the existing
tree of other file systems.
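As an illustration, these attributes could be collected in a C structure like the hypothetical one below (real file systems such as ext4 or FAT lay out their directory entries quite differently):

```c
#include <time.h>

/* Hypothetical directory entry collecting the attributes listed above.
 * The Type and Mounting attributes describe the directory as a whole,
 * so they are not repeated per entry here. */
struct dir_entry {
    char           name[256];   /* Name: the name visible to the user         */
    unsigned long  location;    /* Location: block where the file header is   */
    unsigned long  size;        /* Size: words/blocks/bytes in the file       */
    unsigned long  read_pos;    /* Position: the next-read pointer            */
    unsigned long  write_pos;   /* Position: the next-write pointer           */
    unsigned short perms;       /* Protection: read/write/delete/execute bits */
    time_t         created;     /* Usage: time of creation                    */
    time_t         modified;    /* Usage: time of last modification           */
    time_t         accessed;    /* Usage: time of last access                 */
};
```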
Operations on Directory
The various types of operations on the directory are:
Creating
Deleting
Searching
List a directory
Renaming
Link
Unlink
Creating: - In this operation, a directory is created. The name of the directory should
be unique.
Deleting: - If there is a file that we don’t need, then we can delete that file from the
directory. We can also remove the whole directory if the directory is not required.
An empty directory can also be deleted. An empty directory is a directory that only
consists of dot and dot-dot.
Searching: - In this operation, we can search a directory for a specific file or
another directory.
List a directory: - In this operation, we can retrieve all the files list in the directory.
And we can also retrieve the content of the directory entry for every file present in
the list.
If we want to read the list of all the files in a directory, the directory must first be
opened; after we have read it, the directory must be closed so that the internal
table space can be freed up (see the sketch below).
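This open-read-close sequence is exactly how the standard POSIX directory API works; the short C program below lists the current directory using the real opendir(), readdir(), and closedir() calls:

```c
#include <dirent.h>
#include <stdio.h>

/* Listing a directory with the POSIX API follows exactly the
 * open -> read -> close sequence described above. */
int main(void) {
    DIR *d = opendir(".");                /* open the directory */
    if (!d) { perror("opendir"); return 1; }

    struct dirent *entry;
    while ((entry = readdir(d)) != NULL)  /* read one entry at a time */
        printf("%s\n", entry->d_name);

    closedir(d);                          /* free the internal table space */
    return 0;
}
```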
The directory structures are of the following types:
Single-Level Directory
Two-Level Directory
Tree-Structured Directory
Acyclic Graph Directory
General-Graph Directory
Single-Level Directory: - Single-Level Directory is the easiest directory structure.
There is only one directory in a single-level directory, and that directory is called a
root directory. In a single-level directory, all the files are present in one directory
that makes it easy to understand. In this, under the root directory, the user cannot
create the subdirectories.
Figure: Single-level directory structure
Disadvantages of Single-Level Directory
The disadvantages of the single-level directory are:
If the size of the directory is large, then searching will be tough.
In a single-level directory, we cannot group files of a similar type.
There is a possibility of name collisions, because no two files can have the same
name.
The task of choosing a unique file name is a little bit complex.
Two-Level Directory
Two-Level Directory is another type of directory structure. In this, it is possible to
create an individual directory for each of the users. There is one master node in the
two-level directory that includes an individual directory for every user. At the
second level of the directory, there is a separate directory for each user. Without
permission, no user can enter another user's directory.
Advantages of Two-Level Directory
In the two-level directory, different users can have the same file name and the
same directory name.
Because of user-grouping and pathnames, searching for files is quite easy.
Disadvantages of Two-Level Directory
The disadvantages of the two-level directory are:
In a two-level directory, one user cannot share the file with another user.
Another disadvantage with the two-level directory is it is not scalable.
Tree-Structured Directory
A Tree-structured directory is another type of directory structure in which the
directory entry may be a sub-directory or a file. The tree-structured directory
reduces the limitations of the two-level directory. We can group the same type of
files into one directory.
In a tree-structured directory, each user has their own directory, and a user is not
allowed to enter another user's directory. Although the user can read the data of the
root, the user cannot modify or write it. Only the system administrator has full
access to the root directory. Searching is quite effective, and we use the concept of
the current working directory. We can access a file by using two kinds of paths,
either absolute or relative.
Acyclic Graph Directory
With the help of aliases and links, we can make the directory an acyclic graph, in
which a file or sub-directory may be shared between directories. We may thus have
different paths for the same file. Links may be of two kinds: hard links (physical)
and symbolic links (logical).
If files are shared through linking, there may be a problem in the case of deletion:
If we are using a soft link, then when the file is deleted, only a dangling pointer is
left behind.
If we are using a hard link, then when we delete a file, we also have to remove all
the references connected with it.
General-Graph Directory
The General-Graph directory is another vital type of directory structure. In this
type of directory, cycles are permitted, so a directory can be reached through more
than one parent directory.
The main issue in the general-graph directory is calculating the total space or size
taken by the directories and the files.
The General-Graph directory is more flexible than the other directory structure.
Cycles are allowed in the general-graph directory.
Disadvantages of General-Graph Directory
The disadvantages of the general-graph directory are:
It is more costly than the other directory structures.
It needs garbage collection, because cycles make it harder to determine when a
file or directory can safely be deleted.
Directory Implementation
A directory can be implemented in two ways:
Linear List
Hash Table
Linear List: - The linear list is the most straightforward algorithm used for
directory implementation. In this algorithm, we keep all the files in a directory in a
list, like a singly linked list. Each entry contains a pointer to the data blocks
allocated to the file and a pointer to the next file in the directory.
Figure: Linear-list implementation of a directory
Hash Table: - In a directory, for every file, a key-value pair is generated and stored
in the hash table. With the help of a hash function applied to the file name, we can
determine the key, and the key points to the corresponding file stored in the
directory.
In a linear list, the task of searching is slow because we may have to scan the
entire list, but in the hash-table approach, there is no need to search the entire list.
Searching in a hash table is therefore quite efficient: using the key, we only have
to check the corresponding hash-table entry, and from its value we can fetch the
corresponding file (see the sketch below).
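A minimal C sketch of such a hash-table directory (the bucket count, hash function, and field names are our own choices for illustration) shows why a lookup only touches one chain instead of the whole list:

```c
#include <stdio.h>
#include <string.h>

/* Minimal sketch of hash-table directory lookup: the file name is
 * hashed to a bucket, so a lookup inspects one chain instead of the
 * whole directory. Collisions are handled by simple chaining. */
#define BUCKETS 64

struct dent {
    const char   *name;
    unsigned long first_block;   /* "value": where the file's data starts */
    struct dent  *next;          /* next entry in the same bucket */
};

static struct dent *table[BUCKETS];

static unsigned hash(const char *s) {
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % BUCKETS;
}

struct dent *dir_lookup(const char *name) {
    for (struct dent *e = table[hash(name)]; e; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;            /* found without scanning the whole directory */
    return NULL;
}
```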
Figure: The Linux Kernel
For the purpose of this article we will only be focusing on the first three important
subsystems of the Linux Kernel. The basic functioning of each of these subsystems
is elaborated below:
The Process Scheduler: This kernel subsystem is responsible for fairly distributing
the CPU time among all the processes running on the system simultaneously.
The Memory Management Unit: This kernel sub-unit is responsible for proper
distribution of the memory resources among the various processes running on the
system. The MMU does more than just simply provide separate virtual address
spaces for each of the processes.
The Virtual File System: This subsystem is responsible for providing a unified
interface to access stored data across different filesystems and physical storage
media.
2. Explain different methods used to solve the problem of security at the operating
system level
The term operating system (OS) security refers to practices and measures that can
ensure the confidentiality, integrity, and availability (CIA) of operating systems.
The most common techniques used to protect operating systems include the use of
antivirus software and other endpoint protection measures, regular OS patch
updates, a firewall for monitoring network traffic, and enforcement of secure access
through least privileges and user controls.
Malware
Malware is short for malicious software, which encompasses a range of attack
vectors such as viruses, worms, trojans, and rootkits. Malware is injected into a
system without the owner's consent, or by masquerading as legitimate software,
with the objective of stealing, destroying, or corrupting data, or compromising the
device.
Malware can also replicate, allowing it to spread further in a corporate network and
beyond. Malware attacks often go undetected by the target user, allowing for the
quiet extraction of sensitive data. In other cases, attackers silently "herd"
compromised devices into botnets and use them for criminal activities such as
distributed denial of service (DDoS) attacks.
Denial of Service Attacks
A Denial of Service (DoS) attack is intended to clog a system with fake requests so
it becomes overloaded, and eventually stops serving legitimate requests. Some DoS
attacks, in addition to overwhelming a system’s resources, can cause damage to the
underlying infrastructure.
An example of a DoS attack is the repeated use of system requests in a tight loop,
or a "SYN flood", in which the attacker sends a large number of network requests,
requiring the server to acknowledge each one and exhausting its resources.
Network Intrusion
Network intrusion occurs when an individual gains access to a system for improper
use. There are several types of network intrusion depending on the type of intruder:
Threat actors look for buffer overflow vulnerabilities, which they can exploit to
inject scripts that help them hijack the system or crash it.
Authentication Measures
You can use the following techniques to authenticate users at the operating system
level:
Security keys: keys are provided by a key generator, usually in the form of a
physical dongle. The user must insert the key into a slot in the machine to log in.
Username-password combinations: The user enters a username that is registered
with the OS, along with a matching password.
Biometric signatures: The user scans a physical attribute, such as a fingerprint or
retina, to identify themselves.
Multi-factor authentication: Modern authentication systems use multiple methods to
identify a user, combining something the user knows (credentials), something they
own (such as a mobile device), and/or a physical characteristic (biometrics).
Using One-Time Passwords
One-time passwords offer an additional layer of security when combined with
standard authentication measures. Users must enter a unique password generated
each time they log in to the system. A one-time password cannot be reused.
What is OS virtualization?
OS virtualization enables you to run multiple isolated user environments on the
same OS kernel. The technology that creates and enables this type of isolation is
called a "hypervisor", which serves as a layer located between the device and the
virtualized resources.
The hypervisor manages the virtual machines (VMs) running on the device
(typically 2-3 VMs). Each VM is used for a separate user or security zone. There
are several types of VMs that can run alongside each other. Here are the three main
categories:
Fully locked-down VM
Should be used to provide access to sensitive data and corporate systems, such as IT
environments, payment systems, and sensitive customer data.
Unlocked, open VM
Typically used for non-sensitive day-to-day tasks, such as browsing untrusted
websites and running untrusted applications, where exposure carries less risk.
Semi-locked-down VM
A middle ground, typically allowing access to some corporate resources while
restricting access to the most sensitive systems.
Advantages of OS virtualization
Each type of VM is limited to the actions allowed by design. Any further action is
restricted. This keeps the environment secure. The hypervisor runs below the OS of
the device and splits the device into multiple VMs running locally with their own
OS—effectively isolating users.
Because the users are isolated, the devices remain secure. This ensures that
employees and third parties can gain access to company resources without
endangering the corporate environment.
Vulnerability Assessment
Vulnerability assessment involves testing for weaknesses that may be lying
undetected in an operating system. Identifying vulnerabilities allows you to identify
possible vectors for an attack so you can better understand the risk to your system.
Penetration testing is one of the typical methods used for OS vulnerability
assessment. It helps discover vulnerabilities beyond the obvious and seeks to
identify the methods an attacker may use to exploit them. Security teams can
leverage the insights provided by pentesting to put effective security measures in
place.
There are three types of penetration testing, each of which provides different types
of insights into operating system security and potential for exploitation:
White Box: The penetration tester has full technical knowledge of the system being
tested.
Grey Box: The pentester has limited technical knowledge of the system being
tested.
Black Box: The pentester doesn’t have any prior technical knowledge of the system
being tested.
Improving Operating System Security with Hysolate
Hysolate is a full OS isolation solution for Windows 10 and Windows 11, splitting
your endpoint into a more secure corporate zone and a less secure zone for daily
tasks. This means that one OS can be reserved for corporate access, with strict
networking and security policies, and the other can be a more open zone for
accessing untrusted websites and applications.
Hysolate sits on the user endpoint, so it provides a good user experience, but it is
managed via a granular cloud-based management console. This means that admins
can monitor and control exactly what their team is using the isolated OS
environment for, and it can easily be wiped if threats are detected. Hysolate is easy
to deploy and can be scaled to your entire team, not just the technical members.
Hysolate isolates applications, websites, documents, and peripherals, giving you
improved security and manageability.
3. What is protection? Explain principles and goals of protection.
Goals of Protection
The most obvious goal is to prevent malicious misuse of the system by users or
programs.
To ensure that each shared resource is used only in accordance with system
policies, which may be set either by system designers or by system administrators.
To ensure that errant programs cause the minimal amount of damage possible.
Note that protection systems only provide the mechanisms for enforcing policies
and ensuring reliable systems. It is up to administrators and users to implement
those mechanisms effectively.
14.2 Principles of Protection
The principle of least privilege dictates that programs, users, and systems be given
just enough privileges to perform their tasks.
This ensures that failures do the least possible amount of harm.
For example, if a program needs special privileges to perform a task, it is better to
make it a SGID program with group ownership of "network" or "backup" or some
other pseudo group, rather than SUID with root ownership. This limits the amount
of damage that can occur if something goes wrong.
Typically each user is given their own account, and has only enough privilege to
modify their own files.
The root account should not be used for normal day-to-day activities. The system
administrator should also have an ordinary account, and reserve use of the root
account for only those tasks which need root privileges.
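A classic concrete application of least privilege is a daemon that needs root only briefly (say, to bind a low network port) and then permanently drops to an unprivileged account. The C sketch below uses the real POSIX setgid()/setuid() calls; the account name "daemon" is an assumption, so substitute whatever unprivileged user your system provides:

```c
#include <stdio.h>
#include <unistd.h>
#include <pwd.h>

/* Least privilege in practice: perform the one privileged step, then
 * drop to an unprivileged account for everything else. */
int main(void) {
    /* ... perform the one task that genuinely needs root here ... */

    struct passwd *pw = getpwnam("daemon");   /* assumed account name */
    if (!pw) {
        fprintf(stderr, "no such user\n");
        return 1;
    }

    /* Drop the group first, then the user: once setuid() succeeds,
     * the process can never regain root privileges. */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("drop privileges");
        return 1;
    }

    printf("now running as uid %d\n", (int)getuid());
    /* ... all remaining work runs with minimal privileges ... */
    return 0;
}
```

The ordering matters: if the user ID were dropped first, the process would no longer have the privilege to change its group ID.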
14.3 Domain of Protection
A computer can be viewed as a collection of processes and objects ( both HW &
SW ).
The need to know principle states that a process should only have access to those
objects it needs to accomplish its task, and furthermore only in the modes for which
it needs access and only during the time frame when it needs access.
The modes available for a particular object may depend upon its type.
14.3.1 Domain Structure
A protection domain specifies the resources that a process may access.
Each domain defines a set of objects and the types of operations that may be
invoked on each object.
An access right is the ability to execute an operation on an object.
A domain is defined as a set of < object, { access right set } > pairs. Note that some
domains may be disjoint while others overlap.
2. Core Services Layer
Some of the important frameworks available in the Core Services layer are Core
Foundation, Foundation, Core Data, Core Location, and CFNetwork.
3. Media Layer: Graphics, Audio and Video technology is enabled using the Media
Layer.
Graphics Framework:
UIKit Graphics – It describes high level support for designing images and also used
for animating the content of your views.
Core Graphics framework – It is the native drawing engine for iOS apps and gives
support for custom 2D vector and image based rendering.
Core Animation – It is the underlying technology that optimizes the animation
experience of your apps.
Core Image – gives advanced support for manipulating video and still images
in a nondestructive way.
OpenGL ES and GLKit – manage advanced 2D and 3D rendering through
hardware-accelerated interfaces.
Metal – It permits very high performance for your sophisticated graphics rendering
and computation works. It offers very low overhead access to the A7 GPU.
Audio Framework:
Media Player Framework – It is a high-level framework which provides easy
access to a user's iTunes library and support for playing playlists.
AV Foundation – It is an Objective C interface for handling the recording and
playback of audio and video.
OpenAL – is an industry standard technology for providing audio.
Video Framework:
AVKit – gives view-level support for presenting and playing video content.
4. Cocoa Touch Layer
Some of the important frameworks in the Cocoa Touch layer are detailed below:
EventKit framework – gives view controllers for showing the standard system
interfaces for seeing and altering calendar related events
GameKit Framework – implements support for Game Center which allows users
share their game related information online
iAd Framework – allows you to deliver banner-based advertisements from your app.
MapKit Framework – gives a scrollable map that you can include into your user
interface of app.
PushKitFramework – provides registration support for VoIP apps.
Twitter Framework – supports a UI for generating tweets and support for creating
URLs to access the Twitter service.
UIKit Framework – gives vital infrastructure for implementing graphical, event-
driven apps in iOS. Some of the important functions of the UIKit framework are:
– Multitasking support
– Basic app management and infrastructure
– User interface management
– Support for touch and motion events
– Cut, copy, and paste support, and many more
5. How Digital signature differs from authentication protocols?
Authentication and digital signatures are typically considered two different things:
authentication is about ‘logging in’, while digital signatures are used for expressing
your consent with documents, approving transactions…
Description | Authentication | Digital Signatures
User authenticity: the user is who (s)he claims to be. | Yes | Yes
Liveness: the user is present during the interaction with the verifier. | Yes | No
Data integrity: the signed data cannot be altered without invalidating the signature. | No | Yes
Non-repudiation: the user cannot deny afterwards that (s)he put the digital signature. | No | Yes
When looking at the underlying technology there is a sweet spot where both
authentication and digital signatures provide the same properties: liveness, asserting
the link with the user, linking with data, and non-repudiation.
Digital Signature Technology
Digital signature technology is based on public key cryptography. A private key is
used to sign data, while the corresponding public key can be used to verify a
signature. It should be infeasible to derive the private key from the public key or
from signatures if the signature scheme is to be considered secure.
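The asymmetry is easy to see in code. The sketch below uses OpenSSL's EVP interface with an Ed25519 key; it assumes OpenSSL 1.1.1 or later, and most error checks are omitted for brevity. Signing needs the private half of the key pair, while verification needs only the public half:

```c
#include <openssl/evp.h>
#include <stdio.h>

int main(void) {
    /* Generate an Ed25519 key pair (private + public key). */
    EVP_PKEY *key = NULL;
    EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_ED25519, NULL);
    EVP_PKEY_keygen_init(kctx);
    EVP_PKEY_keygen(kctx, &key);

    const unsigned char msg[] = "I approve this transaction";
    unsigned char sig[64];                 /* Ed25519 signatures are 64 bytes */
    size_t siglen = sizeof sig;

    /* Sign with the PRIVATE key: only the signer can do this. */
    EVP_MD_CTX *sctx = EVP_MD_CTX_new();
    EVP_DigestSignInit(sctx, NULL, NULL, NULL, key);
    EVP_DigestSign(sctx, sig, &siglen, msg, sizeof msg - 1);

    /* Verify with the PUBLIC key: anyone can check, nobody can forge. */
    EVP_MD_CTX *vctx = EVP_MD_CTX_new();
    EVP_DigestVerifyInit(vctx, NULL, NULL, NULL, key);
    int ok = EVP_DigestVerify(vctx, sig, siglen, msg, sizeof msg - 1);
    printf("signature is %s\n", ok == 1 ? "valid" : "invalid");

    EVP_MD_CTX_free(sctx);
    EVP_MD_CTX_free(vctx);
    EVP_PKEY_CTX_free(kctx);
    EVP_PKEY_free(key);
    return 0;
}
```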
This contrasts with symmetric key cryptography, where both operations ('signing'
and verification) are performed using the same key. A MAC (message
authentication code) is considered the symmetric counterpart, as it also guarantees
the integrity of the data and the link with possession of the key. Since a symmetric
key is used, this has grave implications for the security properties: the verifier is
also able to generate the MAC, so there is no way of knowing who generated it.
This implies that there is no non-repudiation: an alleged author of a MAC can
always blame the verifier for generating it. To make matters more confusing, many
incorrectly label MACs as digital signatures, despite the fundamentally different
security properties.
Authentication Technology
There is a broad range of technology used for authentication: passwords, OTPs,
zero-knowledge protocols, MAC-based protocols and … protocols using digital
signatures.
Authentication happens through an interaction between the user (device) and the
verifier (server). This is contrary to digital signatures, where the verifier can
perform the verification at any time in the future, without the signer being present.
A Notable Exception
In some rare cases, using digital signatures for authentication, and hence obtaining
non-repudiation, is an unwanted property. An excellent example can be found in
ePassports. The purpose of a passport is to prove your identity. As part of this
process the authenticity of the chip inside the ePassport is validated through an
authentication protocol. The chip essentially proves knowledge of the private key,
corresponding to the public key that is linked by the government to your identity.
One of the design criteria for ePassports was, however, that the execution of the
protocol should not lead to some kind of proof. Repudiation was crucial. For this
reason, no digital signatures are used in the authentication protocol for the
ePassport chip.
6. Explain goals and principles of system protection in detail.
Protection is especially important in a multiuser environment when multiple users
use computer resources such as CPU, memory, etc. It is the operating system's
responsibility to offer a mechanism that protects each process from other processes.
In a multiuser environment, all assets that require protection are classified as
objects, and those that wish to access these objects are referred to as subjects. The
operating system grants different 'access rights' to different subjects.
Computer resources such as the software, memory, and processor need protection.
Protective measures complement a multiprogramming OS so that multiple users
may safely share a common logical namespace, such as a directory or files.
Protection may be achieved by maintaining confidentiality, integrity, and
availability in the OS. It is critical to secure the device from unauthorized access,
viruses, worms, and other malware.
The policies define how processes access the computer system's resources, such as
the CPU, memory, software, and even the operating system. It is the responsibility
of both the operating system designer and the app programmer. However, these
policies can be modified at any time.
Protection is a technique for guarding data and processes against harmful or
intentional infiltration. It contains protection policies that are either established by
the system itself, set by management, or imposed individually by programmers to
ensure that their programs are protected to the greatest extent possible.
It also provides a multiprogramming OS with the security that its users expect when
sharing common space such as files or directories.
Role of Protection in Operating System
Its main role is to provide a mechanism for implementing policies that define the
use of resources in a computer system. Some rules are set during the system's
design, while others are defined by system administrators to secure their files and
programs.
Every program has distinct policies for using resources, and these policies may
change over time. Therefore, system protection is not only the responsibility of the
system's designer; the programmer must also design protection techniques to
protect their system against infiltration.
Domain of Protection
Various domains of protection in operating system are as follows:
The protection policies restrict each process's access to its resource handling. A
process is obligated to use only the resources necessary to fulfil its task within the
time constraints and in the mode in which it is required. It is a process's protected
domain.
Processes and objects are abstract data types in a computer system, and these
objects have operations that are unique to them. A domain component is defined as
<object, {set of operations on object}>.
Figure: Protection in Operating System
Each domain comprises a collection of objects and the operations that may be
implemented on them. A domain could be made up of only one process, procedure,
or user. If a domain is linked with a procedure, changing the domain would mean
changing the procedure ID. Objects may share one or more common operations.
Association between Process and Domain
When processes have the necessary access rights, they can switch from one domain
to another. It could be of two types, as shown below.
1. Fixed or Static
In a fixed association, all access rights could be given to processes at the start.
However, this results in a large number of access rights for domain switching. As a
result, a technique for changing the domain's contents dynamically is needed.
2. Changing or dynamic
A process may switch domains dynamically, creating a new domain in the process.
Security Measures in the Operating System
The network used for file transfers must be secure at all times. During the transfer,
no alien software should be able to harvest information from the network. It is
referred to as network sniffing, and it could be avoided by implementing encrypted
data transfer routes. Moreover, the OS should be capable of resisting forceful or
even accidental violations.
Passwords are a good authentication method, but they are the most common and
vulnerable. It is very easy to crack passwords.
Security measures at various levels are put in place to prevent malpractice, such as
barring unauthorized persons from the premises or from access to the systems.
The best authentication techniques include a username-password combination, eye
retina scan, fingerprint, or even user cards to access the system.
System Authentication
One-time passwords, encrypted passwords, and cryptography are used to create a
strong password and a formidable authentication source.
1. One-time Password: A password that is valid for a single login session only and
cannot be reused, as described earlier.
2. Encrypted Passwords: Passwords are stored and transmitted only in encrypted
form, so they cannot be read directly even if intercepted.
3. Cryptography
It's another way to ensure that unauthorized users can't access data transferred over
a network. It aids in the secure transmission of data, introducing the concept of a
key to protect the data. The key is crucial here. When a user sends data, they
encode it using a key, and the receiver must decode the data with the same key. As
a result, even if the data is stolen in transit, there is a good chance the unauthorized
user won't be able to read it (see the toy sketch below).
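As a toy illustration only (XOR is not a secure cipher; real systems use algorithms such as AES), the C sketch below shows the essential shared-key idea: applying the same key twice encodes and then decodes the message:

```c
#include <stdio.h>
#include <string.h>

/* Toy shared-key ("symmetric") encryption: the same key both encodes
 * and decodes. XOR is NOT secure; it only illustrates the concept. */
static void xor_with_key(char *buf, size_t len, const char *key, size_t klen) {
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % klen];
}

int main(void) {
    char msg[] = "transfer 100 to alice";
    const char key[] = "shared-secret";
    size_t len = strlen(msg);          /* capture length before encoding */

    xor_with_key(msg, len, key, strlen(key));   /* sender encodes   */
    /* ...the message crosses the untrusted network as gibberish... */
    xor_with_key(msg, len, key, strlen(key));   /* receiver decodes */

    printf("%.*s\n", (int)len, msg);   /* original text is recovered */
    return 0;
}
```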
Access Matrix: consider the following matrix, in which rows are protection
domains and columns are objects:
       | F1         | F2   | F3         | Printer
D1     | read       |      | read       |
D2     |            |      |            | print
D3     |            | read | execute    |
D4     | read write |      | read write |
According to the above matrix, there are four domains and four objects: three files
(F1, F2, F3) and one printer. A process executing in D1 can read files F1 and F3. A
process executing in domain D4 has the same rights as D1, but it can also write to
those files. The printer can be accessed only by a process executing in domain D2.
The access matrix can also treat domains themselves as objects, with a "switch"
right that allows a process to change from one domain to another:
       | F1         | F2   | F3         | Printer | D1     | D2     | D3     | D4
D1     | read       |      | read       |         |        | switch |        |
D2     |            |      |            | print   |        |        | switch | switch
D3     |            | read | execute    |         |        |        |        |
D4     | read write |      | read write |         | switch |        |        |
According to this matrix, a process executing in domain D2 can switch to domains
D3 and D4, a process executing in domain D4 can switch to domain D1, and a
process executing in domain D1 can switch to domain D2.
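One way to see how such a matrix is consulted is to encode it directly, as in the hypothetical C sketch below (the representation of rights as strings and the check_access helper are our own illustration, not a real OS interface):

```c
#include <stdio.h>
#include <string.h>

/* The access matrix above, encoded directly: rows are domains,
 * columns are objects, each cell is a string listing the rights. */
enum { F1, F2, F3, PRINTER, D1, D2, D3, D4, NOBJ };
enum { DOM1, DOM2, DOM3, DOM4, NDOM };

static const char *matrix[NDOM][NOBJ] = {
    /*         F1           F2      F3           Printer  D1        D2        D3        D4       */
    [DOM1] = { "read",      "",     "read",      "",      "",       "switch", "",       ""       },
    [DOM2] = { "",          "",     "",          "print", "",       "",       "switch", "switch" },
    [DOM3] = { "",          "read", "execute",   "",      "",       "",       "",       ""       },
    [DOM4] = { "read write","",     "read write","",      "switch", "",       "",       ""       },
};

/* A process in `dom` may perform `right` on `obj` only if the right
 * appears in the corresponding cell of the matrix. */
int check_access(int dom, int obj, const char *right) {
    return strstr(matrix[dom][obj], right) != NULL;
}

int main(void) {
    printf("D1 read F1 : %s\n", check_access(DOM1, F1, "read")   ? "allowed" : "denied");
    printf("D2 read F1 : %s\n", check_access(DOM2, F1, "read")   ? "allowed" : "denied");
    printf("D2 -> D3   : %s\n", check_access(DOM2, D3, "switch") ? "allowed" : "denied");
    return 0;
}
```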