notes
1. New State-
A process is said to be in new state when a program present in the secondary memory is initiated for
execution.
2. Ready State-
A process moves from new state to ready state after it is loaded into the main memory and is ready for
execution.
In ready state, the process waits for its execution by the processor.
3. Run State-
A process moves from ready state to run state after it is assigned the CPU for execution.
4. Terminate State-
A process moves from run state to terminate state after its execution is completed.
After entering the terminate state, context (PCB) of the process is deleted by the operating system.
5. Block or Wait State-
A process moves from run state to block or wait state if it requires an I/O operation or a blocked resource during its execution.
After the I/O operation gets completed or resource becomes available, the process moves to the ready
state.
6. Suspend Ready State-
A process moves from ready state to suspend ready state if a process with higher priority has to be executed but the main memory is full.
Moving a process with lower priority from ready state to suspend ready state creates room for the higher priority process in the ready state.
The process remains in the suspend ready state until the main memory becomes available.
When main memory becomes available, the process is brought back to the ready state.
7. Suspend Wait State-
A process moves from wait state to suspend wait state if a process with higher priority has to be executed but the main memory is full.
Moving a process with lower priority from wait state to suspend wait state creates room for the higher priority process in the main memory.
After the resource becomes available, the process is moved to the suspend ready state.
After main memory becomes available, the process is moved to the ready state.
The CPU scheduler, which is a part of the operating system, is responsible for selecting the next process
to be executed from the ready queue.
In short, CPU scheduling decides the order and priority of the processes to run and allocates CPU time based on various parameters such as CPU utilization, throughput, turnaround time, waiting time, and response time.
1. CPU Utilization: It is the fraction of time the CPU is kept busy doing useful work. The aim is to keep the CPU as busy as possible.
2. Throughput: It is the number of processes completed per unit time. A higher throughput indicates that the system is able to execute more processes in a given time period. The throughput may vary depending on the length or duration of the processes.
3. Turnaround Time: The time elapsed from the submission of a process to its completion is known as the turnaround time. It is the total time taken by a process, including both waiting time and execution time. The aim is to minimize the average turnaround time.
4. Waiting Time: It is the amount of time a process spends waiting in the ready queue before it gets executed. A scheduling algorithm does not affect the time required to complete a process once it starts execution; it only affects the waiting time. The objective is to minimize the average waiting time.
5. Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus another criterion is the time taken from the submission of a request until the first response is produced. This measure is called response time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival Time
6. Completion Time
The completion time is the time when the process stops executing, which means that the process has
completed its burst time and is completely executed.
7. Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor the
higher-priority processes.
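The criteria above can be made concrete with a small worked example. The sketch below computes completion, turnaround, waiting, and response time for three hypothetical processes scheduled first-come-first-served; the arrival and burst times are made-up values for illustration.

```python
# (name, arrival_time, burst_time) -- made-up values for an FCFS demo.
processes = [
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 4),
]

results = []
clock = 0
for name, arrival, burst in processes:
    start = max(clock, arrival)          # CPU allocated for the first time
    completion = start + burst           # completion time
    turnaround = completion - arrival    # turnaround = completion - arrival
    waiting = turnaround - burst         # waiting = turnaround - burst
    response = start - arrival           # response = first allocation - arrival
    results.append((name, completion, turnaround, waiting, response))
    clock = completion

for row in results:
    print(row)
# P1 completes at 5 with zero waiting; P3 waits 6 units before running.
```

Note how waiting time is just turnaround time minus burst time, and under non-preemptive FCFS the response time equals the waiting time.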
Explain the working of the bully algorithm with the help of an example and a diagram
The Bully Algorithm is a popular method for electing a coordinator in distributed
systems. When nodes or processes in a system need a leader to manage tasks or
make decisions, this algorithm helps them choose one, even if some nodes fail.
The process involves nodes “bullying” each other by checking who has the highest
ID, and the one with the highest ID becomes the coordinator.
There are three kinds of messages that nodes exchange with each other during the bully algorithm:
Election message
OK message
Coordinator message
Example
Suppose there are n different nodes with unique identifiers ranging from
0 to n−1; here n = 6, so the IDs are 0 to 5.
Given that 5 is the highest ID amongst the nodes, it is the leader. Assume
that the leader crashes and node 2 is the first to notice the breakdown of the
leader.
Accordingly, the node with ID 2 sends an election message to all nodes with an
ID greater than its own ID.
Nodes 3 and 4 both receive the election message. However, since node 5 has
crashed, it neither receives the message nor responds. Nodes 3 and 4 accordingly
initiate elections of their own by sending election messages to nodes with IDs
greater than their own respective IDs.
Moreover, they respond with an OK message to the node that sent them a
request for election since they are not the nodes with the highest IDs. This
means that nodes 3 and 4 would confirm to node 2 that they are alive and
non-crashed.
Node 4 figures out that it is the node with the highest ID, then
sends a coordinator message to all of the alive nodes.
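The election walk-through above can be sketched as a simulation. This is an illustrative model, not a distributed implementation: the set of alive nodes is known up front, and message passing is collapsed into list comprehensions. The IDs mirror the example (6 nodes, node 5 crashed, node 2 initiates).

```python
def bully_election(initiator, alive, all_ids):
    """Return the new coordinator's ID under the bully algorithm."""
    candidate = initiator
    # Send an election message to every node with a higher ID.
    higher = [n for n in all_ids if n > candidate]
    responders = [n for n in higher if n in alive]  # these reply with OK
    while responders:
        # Each alive responder bullies in turn; effectively the highest
        # alive responder becomes the new candidate and re-runs the election.
        candidate = max(responders)
        higher = [n for n in all_ids if n > candidate]
        responders = [n for n in higher if n in alive]
    # No higher node answered: candidate sends the coordinator message.
    return candidate

all_ids = [0, 1, 2, 3, 4, 5]
alive = {0, 1, 2, 3, 4}                    # node 5 has crashed
print(bully_election(2, alive, all_ids))   # → 4
```

Node 2's election message reaches 3 and 4; both answer OK, node 4 finds no higher node alive, and so node 4 announces itself as coordinator, matching the example.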
Producer-Consumer Problem (using Semaphores)
Problem statement
We have a shared buffer that is used to transfer data items between multiple
producer threads and multiple consumer threads. The buffer has a maximum
size, and producers should not overflow the buffer by adding data items when
it is full. Similarly, consumers should not attempt to retrieve data items from
an empty buffer.
Initialization of semaphores
mutex = 1
empty = n // Initially, all n slots are empty
full = 0 // Initially, no slot is full. Thus full slots are 0
Producer code-
do{
//produce an item
wait(empty);
wait(mutex);
//place in buffer
signal(mutex);
signal(full);
}while(true)
Consumer code-
do{
wait(full);
wait(mutex);
//remove item from buffer
signal(mutex);
signal(empty);
}while(true)
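The pseudocode above maps directly onto real counting semaphores. Below is a runnable sketch using Python threads; the buffer size n = 3 and the item values are made-up for the demo, but the wait/signal pattern is exactly the one shown above.

```python
import threading
import collections

n = 3                                # buffer capacity (assumed for the demo)
buffer = collections.deque()
mutex = threading.Semaphore(1)       # mutex = 1
empty = threading.Semaphore(n)       # empty = n (all slots empty initially)
full = threading.Semaphore(0)        # full = 0 (no slot is full initially)

def producer(items):
    for item in items:
        empty.acquire()              # wait(empty): block if buffer is full
        mutex.acquire()              # wait(mutex): enter critical section
        buffer.append(item)          # place in buffer
        mutex.release()              # signal(mutex)
        full.release()               # signal(full): one more filled slot

consumed = []
def consumer(count):
    for _ in range(count):
        full.acquire()               # wait(full): block if buffer is empty
        mutex.acquire()              # wait(mutex)
        consumed.append(buffer.popleft())  # remove item from buffer
        mutex.release()              # signal(mutex)
        empty.release()              # signal(empty): one more free slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)                      # items arrive in FIFO order
```

The `empty` semaphore prevents overflow of the full buffer, `full` prevents reading from an empty buffer, and `mutex` guards the buffer itself, so the two threads never corrupt it.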
Cristian's Algorithm:
1) The process on the client machine sends a request for the clock time (the time at the server) to the Clock Server at time T_0.
2) The Clock Server listens to the request made by the client process and returns the response in the form of its clock time.
3) The client process fetches the response from the Clock Server at time T_1
and calculates the synchronized client clock time using the formula given below.
T_0 refers to the time at which the request was sent by the client process,
T_1 refers to the time at which the response was received by the client process.
T_1 - T_0 refers to the combined time taken by the network and the server to
transfer the request to the server, process the request, and return the response
back to the client process, assuming that the network latency of the request
and of the response are approximately equal.
The time at the client-side differs from actual time by at most (T_1 - T_0)/2
seconds. Using the above statement we can draw a conclusion that the error in
synchronization can be at most (T_1 - T_0)/2 seconds.
Hence,
T_CLIENT = T_SERVER + (T_1 - T_0)/2
where T_SERVER is the clock time returned by the server.
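The synchronization step can be sketched in a few lines. The timestamps here are made-up values in seconds; in a real client, T_0 and T_1 would come from the local clock and T_SERVER from the server's reply.

```python
def cristian_sync(t0, t1, server_time):
    """Estimate the client clock as server time plus half the round trip."""
    return server_time + (t1 - t0) / 2

t0 = 10.000            # request sent (client clock)
t1 = 10.010            # response received (client clock)
server_time = 12.500   # clock time in the server's reply

print(cristian_sync(t0, t1, server_time))  # ≈ 12.505
print((t1 - t0) / 2)                       # maximum error: ≈ 0.005 s
```

The round trip took 10 ms, so the server's timestamp is assumed to be 5 ms old by the time it arrives, and the client sets its clock 5 ms ahead of it, with at most (T_1 - T_0)/2 = 5 ms of error.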
Internal fragmentation vs External fragmentation-
Internal fragmentation: It occurs in the worst fit memory allocation method.
External fragmentation: It occurs in the best fit and first fit memory allocation methods.
Paging vs Segmentation-
1. Paging uses non-contiguous memory allocation. Segmentation also uses non-contiguous memory allocation.
2. Paging divides the program into fixed size pages. Segmentation divides the program into variable size segments.
3. In paging, the OS is responsible. In segmentation, the compiler is responsible.
7. In paging, there is no external fragmentation. In segmentation, there is no internal fragmentation.
9. In paging, a page table is used to maintain the page information. In segmentation, a segment table maintains the segment information.
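The page table's role described above can be illustrated with a toy address translation. The page size and page-table contents below are made-up values; real page tables are maintained by the OS per process.

```python
PAGE_SIZE = 1024                  # fixed-size pages (paging)
page_table = {0: 5, 1: 2, 2: 8}   # page number -> frame number (assumed)

def translate(logical_address):
    """Translate a logical address to a physical address via the page table."""
    page = logical_address // PAGE_SIZE    # which fixed-size page
    offset = logical_address % PAGE_SIZE   # position within the page
    frame = page_table[page]               # page table lookup
    return frame * PAGE_SIZE + offset      # physical address

print(translate(2100))   # → 8244  (page 2, offset 52, frame 8)
```

Because every page is the same fixed size, any free frame can hold any page, which is why paging has no external fragmentation; segmentation would instead look up a (base, limit) pair in the segment table for a variable-size segment.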
Deadlock
Already on notes