Notes

Uploaded by huzaifmanzoor988

What is a process?

Process life cycle or stages of a process

A process is a program in execution. While a program is a passive set of instructions stored in memory, a process is that program actually running and using computer resources such as memory and the CPU. In other words, the active state of a program is called a process. For example, when you open an app, like a browser or a game, the computer takes the program's instructions and runs them; this running instance of the program is called a process.

1. New State-

A process is said to be in new state when a program present in the secondary memory is initiated for
execution.

2. Ready State-

A process moves from new state to ready state after it is loaded into the main memory and is ready for
execution.

In ready state, the process waits for its execution by the processor.

In a multiprogramming environment, many processes may be present in the ready state.


3. Run State-

A process moves from ready state to run state after it is assigned the CPU for execution.

4. Terminate State-

A process moves from run state to terminate state after its execution is completed.

After entering the terminate state, context (PCB) of the process is deleted by the operating system.

5. Block Or Wait State-

A process moves from run state to block or wait state if it requires an I/O operation or some blocked
resource during its execution.

After the I/O operation gets completed or resource becomes available, the process moves to the ready
state.

6. Suspend Ready State-

A process moves from ready state to suspend ready state if a process with higher priority has to be
executed but the main memory is full.

Moving a lower-priority process from the ready state to the suspend ready state creates room in main memory for the higher-priority process.

The process remains in the suspend ready state until the main memory becomes available.

When main memory becomes available, the process is brought back to the ready state.

7. Suspend Wait State-

A process moves from wait state to suspend wait state if a process with higher priority has to be
executed but the main memory is full.

Moving a lower-priority process from the wait state to the suspend wait state creates room in main memory for the higher-priority process.

After the resource becomes available, the process is moved to the suspend ready state.

After main memory becomes available, the process is moved to the ready state.
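The seven states and the moves between them can be summarized as a transition table. The sketch below is a hypothetical illustration (state names follow these notes; a real kernel's state machine differs in detail):

```python
# Hypothetical sketch of the process life cycle as a transition table.
VALID_TRANSITIONS = {
    "new": {"ready"},                       # loaded into main memory
    "ready": {"run", "suspend_ready"},      # dispatched, or swapped out
    "run": {"ready", "wait", "terminate"},  # preempted, blocked, or finished
    "wait": {"ready", "suspend_wait"},      # I/O completed, or swapped out
    "suspend_ready": {"ready"},             # brought back into main memory
    "suspend_wait": {"suspend_ready"},      # resource became available
    "terminate": set(),                     # PCB deleted; no further moves
}

def can_move(src, dst):
    """Return True if the life cycle allows moving from src to dst."""
    return dst in VALID_TRANSITIONS.get(src, set())

print(can_move("run", "wait"))        # True: process blocks on I/O
print(can_move("terminate", "run"))   # False: a terminated process is gone
```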

What is CPU scheduling?


CPU scheduling is the process of deciding which process or task should be executed by the CPU at any given time. The main goals of CPU scheduling are to maximize CPU utilization, minimize response time, and ensure fair allocation of resources among different processes. In other words, it determines the order in which processes are executed by the CPU.

The CPU scheduler, which is a part of the operating system, is responsible for selecting the next process
to be executed from the ready queue.

In short, CPU scheduling decides the order and priority of the processes to run and allocates CPU time based on parameters such as CPU utilization, throughput, turnaround time, waiting time, and response time.

Criteria of CPU Scheduling


1. CPU Utilization: It refers to the percentage of time the CPU is busy executing processes. The goal is to maximize CPU utilization and minimize idle time. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it typically varies from 40 to 90 percent depending on the load on the system.

2. Throughput: It is the number of processes completed per unit time. A higher throughput indicates that the system is able to execute more processes in a given time period. Throughput may vary depending on the length or duration of the processes.

3. Turnaround Time: The time elapsed from the submission of a process to its completion is known as the turnaround time. It is the total time taken by a process, including both waiting time and execution time. The aim is to minimize the average turnaround time.

Turn Around Time = Completion Time – Arrival Time.

4. Waiting Time: It is the amount of time a process spends waiting in the ready queue before it gets executed. A scheduling algorithm does not affect the time required to complete a process once it starts execution; it only affects how long the process waits. The objective is to minimize the average waiting time.

Waiting Time = Turnaround Time – Burst Time.

5. Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Another criterion, therefore, is the time from the submission of a request until the first response is produced. This measure is called response time.

Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival Time

6. Completion Time

The completion time is the time when the process stops executing, which means that the process has
completed its burst time and is completely executed.

7. Priority

If the operating system assigns priorities to processes, the scheduling mechanism should favor the
higher-priority processes.
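The metrics defined above can be computed for a concrete schedule. The sketch below assumes a non-preemptive first-come-first-served order; the process data is made up purely for illustration:

```python
# Compute turnaround, waiting, and response time for a non-preemptive
# FCFS schedule (illustrative data, not from a real workload).
def fcfs_metrics(processes):
    """processes: list of (name, arrival_time, burst_time) in FCFS order."""
    time, results = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)           # CPU allocated for the first time
        completion = start + burst           # completion time
        turnaround = completion - arrival    # Completion Time - Arrival Time
        waiting = turnaround - burst         # Turnaround Time - Burst Time
        response = start - arrival           # first allocation - arrival (FCFS)
        results[name] = (turnaround, waiting, response)
        time = completion
    return results

metrics = fcfs_metrics([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)])
print(metrics["P2"])  # P2 starts at t=4: turnaround 6, waiting 3, response 3
```

Note that under non-preemptive FCFS, response time equals waiting time, since a process produces its first response only once it is dispatched.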

Explain the working of the bully algorithm with the help of an example and a diagram
The Bully Algorithm is a popular method for electing a coordinator in distributed
systems. When nodes or processes in a system need a leader to manage tasks or
make decisions, this algorithm helps them choose one, even if some nodes fail.
The process involves nodes “bullying” each other by checking who has the highest
ID, and the one with the highest ID becomes the coordinator.

Three kinds of messages are exchanged between the nodes during the bully algorithm:

Election message

OK message

Coordinator message

Example

Suppose there are n different nodes with unique identifiers ranging from 0 to n−1; in this example, let there be six nodes with IDs 0 to 5.

Given that 5 is the highest ID among the nodes, node 5 is the leader. Assume that the leader crashes and node 2 is the first to notice the breakdown; the node with ID 2 then initiates an election.

Accordingly, the node with ID 2 sends an election message to all nodes with an ID greater than its own.

Nodes 3 and 4 both receive the election message. However, since node 5 has crashed, it neither receives the message nor responds. Nodes 3 and 4 accordingly initiate elections of their own by sending election messages to the nodes with IDs greater than their respective IDs.

Moreover, they respond with an OK message to the node that sent them a
request for election since they are not the nodes with the highest IDs. This
means that nodes 3 and 4 would confirm to node 2 that they are alive and
non-crashed.

Node 4 receives node 3's election message and responds with an OK message to confirm its operating state. As in the previous case, node 5 does not respond, as it is unavailable.

Meanwhile, node 4 has already sent an election message to node 5 and received no response. It concludes that node 5 has crashed and that the node with the highest ID still alive is node 4 itself.

Node 4 figures out that it is the node with the highest ID, then
sends a coordinator message to all of the alive nodes.

Consequently, all nodes are updated with the new leader.
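The election described above can be simulated in a few lines. This is a hypothetical sketch of the message flow, not a networked implementation; the set of alive nodes mirrors the example, where node 5 has crashed:

```python
# Hypothetical simulation of a bully election: the initiator messages every
# higher-ID node; any alive higher node replies OK and runs its own election.
def bully_election(initiator, alive):
    """alive: set of responsive node IDs; returns the new coordinator's ID."""
    higher = [n for n in alive if n > initiator]   # election messages go here
    if not higher:
        return initiator            # no higher node answered: initiator wins
    # each alive higher node replies OK and recurses; the recursion bottoms
    # out at the highest alive ID, which sends the coordinator message
    return max(bully_election(n, alive) for n in higher)

alive_nodes = {0, 1, 2, 3, 4}       # node 5 has crashed
print(bully_election(2, alive_nodes))  # node 4 becomes the new coordinator
```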


Election Algorithms: Election algorithms choose a process from a group of processes to act as a coordinator. If the coordinator process crashes for some reason, a new coordinator is elected on another processor. An election algorithm basically determines where a new copy of the coordinator should be restarted. It assumes that every active process in the system has a unique priority number, and the process with the highest priority is chosen as the new coordinator. Hence, when a coordinator fails, the algorithm elects the active process that has the highest priority number; this number is then sent to every active process in the distributed system. We have two election algorithms for two different configurations of a distributed system.

The Ring Algorithm – This algorithm applies to systems organized as a ring (logically or physically). In this algorithm we assume that the links between processes are unidirectional and that every process can send messages only to the process on its right. The data structure this algorithm uses is the active list, a list containing the priority numbers of all active processes in the system.

Algorithm –

If process P1 detects a coordinator failure, it creates a new active list, which is initially empty. It sends an election message to its neighbour on the right and adds its own number 1 to its active list.

If process P2 receives the election message from the process on its left, it responds in one of three ways:

(I) If the received message's active list does not already contain P2's number, P2 adds 2 to the active list and forwards the message.

(II) If this is the first election message P2 has sent or received, it creates a new active list with the numbers 1 and 2, then sends election message 1 followed by election message 2.

(III) If process P1 receives its own election message back, the active list now contains the numbers of all active processes in the system. P1 then finds the highest priority number in the list and elects that process as the new coordinator.
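The circulation of the election message can be sketched as follows. This is a simplified, hypothetical model (a single message collecting IDs around the ring, with crashed processes skipped), not a full message-passing implementation:

```python
# Simplified ring election: the election message travels rightward from the
# starter, collecting the IDs of alive processes, until it returns home.
def ring_election(starter, ring, alive):
    """ring: process IDs in ring order; alive: set of live IDs."""
    active_list = [starter]                # starter puts its own ID first
    n = len(ring)
    i = (ring.index(starter) + 1) % n
    while ring[i] != starter:              # stop when the message returns
        if ring[i] in alive:               # crashed processes are skipped
            active_list.append(ring[i])
        i = (i + 1) % n
    return max(active_list)                # highest ID becomes coordinator

ring = [1, 2, 3, 4, 5]
print(ring_election(2, ring, alive={1, 2, 3, 4}))  # 5 crashed -> 4 elected
```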

Demonstrate the solution to the producer-consumer problem using semaphores
The producer-consumer problem is a classic synchronization problem in
computer science, where there are two types of threads: producers and
consumers. Producers generate data items and put them into a buffer, while
consumers retrieve and process these data items from the buffer.

Problem statement

We have a shared buffer that is used to transfer data items between multiple
producer threads and multiple consumer threads. The buffer has a maximum
size, and producers should not overflow the buffer by adding data items when
it is full. Similarly, consumers should not attempt to retrieve data items from
an empty buffer.

To solve this problem, we need two counting semaphores – Full and Empty. "Full" keeps track of the number of filled slots in the buffer at any given time, and "Empty" keeps track of the number of unoccupied slots. A binary semaphore "mutex" provides mutual exclusion on the buffer.

Initialization of semaphores

mutex = 1

Full = 0 // Initially, all slots are empty. Thus full slots are 0

Empty = n // All slots are empty initially

Solution for Producer


do{

//produce an item

wait(empty);

wait(mutex);

//place in buffer

signal(mutex);

signal(full);

}while(true)

When the producer produces an item, the value of "empty" is reduced by 1 because one more slot will now be filled. The value of mutex is also reduced, to prevent the consumer from accessing the buffer. Once the producer has placed the item, the value of "full" is increased by 1, and the value of mutex is increased again because the producer's task is complete and the consumer can access the buffer.

Solution for Consumer

do{

wait(full);

wait(mutex);

// consume item from buffer

signal(mutex);

signal(empty);

}while(true)

As the consumer removes an item from the buffer, the value of "full" is reduced by 1, and the value of mutex is also reduced so that the producer cannot access the buffer at that moment. Once the consumer has consumed the item, it increases the value of "empty" by 1 and increases the value of mutex so that the producer can access the buffer again.
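The pseudocode above maps directly onto Python's standard `threading.Semaphore`. Below is a runnable sketch; the buffer capacity and item count are arbitrary choices for illustration:

```python
# Runnable version of the semaphore solution using Python's threading module.
import threading
from collections import deque

n = 5                                    # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)           # mutual exclusion on the buffer
empty = threading.Semaphore(n)           # counts unoccupied slots
full = threading.Semaphore(0)            # counts filled slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                  # wait(empty)
        with mutex:                      # wait(mutex) ... signal(mutex)
            buffer.append(item)          # place in buffer
        full.release()                   # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                   # wait(full)
        with mutex:
            consumed.append(buffer.popleft())  # consume item from buffer
        empty.release()                  # signal(empty)

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # with one producer and one consumer: [0, 1, ..., 9]
```

The `with mutex:` block is equivalent to the wait(mutex)/signal(mutex) pair in the pseudocode, and it guarantees the release even if an exception occurs inside the critical section.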

Explain the working of Cristian’s algorithm


Cristian’s Algorithm

Cristian’s Algorithm is a clock synchronization algorithm used by client processes to synchronize their time with a time server. It works well in low-latency networks where the Round Trip Time is short relative to the required accuracy, while redundancy-prone distributed systems/applications do not go hand in hand with this algorithm. Here, Round Trip Time refers to the duration between the start of a request and the end of the corresponding response.

Algorithm:

1) The process on the client machine sends a request for the clock time (the time at the server) to the Clock Server at time T_0.

2) The Clock Server listens to the request made by the client process and returns a response in the form of its clock time.

3) The client process receives the response from the Clock Server at time T_1 and calculates the synchronized client clock time using the formula:

T_{CLIENT} = T_{SERVER} + (T_1 - T_0)/2

where T_{CLIENT} refers to the synchronized clock time,

T_{SERVER} refers to the clock time returned by the server,

T_0 refers to the time at which request was sent by the client process,

T_1 refers to the time at which response was received by the client process

Working/Reliability of the above formula:

T_1 - T_0 is the combined time taken by the network and the server to transfer the request to the server, process it, and return the response to the client process, assuming that the network latency is approximately equal in both directions.

The time at the client side therefore differs from the actual time by at most (T_1 - T_0)/2 seconds; the synchronization error is bounded by (T_1 - T_0)/2 seconds.
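The client-side computation can be sketched in a few lines. The clock server here is mocked with a fixed offset, since a real deployment would fetch the time over the network:

```python
# Sketch of Cristian's computation; the server call is a local mock that
# pretends the server's clock runs 2.5 seconds ahead of the client's.
import time

def mock_clock_server():
    """Stand-in for the time server; returns the server's clock time."""
    return time.time() + 2.5

def cristian_sync():
    t0 = time.time()                 # T_0: request sent
    t_server = mock_clock_server()   # T_SERVER: server's reply
    t1 = time.time()                 # T_1: response received
    # assume request and response take equal time in transit
    return t_server + (t1 - t0) / 2  # T_CLIENT

offset = cristian_sync() - time.time()
print(offset)  # close to the mocked 2.5 s server offset
```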

Difference between internal and external fragmentation:

1. In internal fragmentation, fixed-sized memory blocks are assigned to processes. In external fragmentation, variable-sized memory blocks are assigned to processes.

2. Internal fragmentation happens when the assigned block is larger than the process. External fragmentation happens when processes are removed from memory, leaving scattered holes.

3. The solution to internal fragmentation is the best-fit block. The solutions to external fragmentation are compaction and paging.

4. Internal fragmentation occurs when memory is divided into fixed-sized partitions. External fragmentation occurs when memory is divided into variable-sized partitions based on the sizes of processes.

5. The difference between the memory allocated and the space actually required is called internal fragmentation. The unused spaces formed between non-contiguous memory fragments, which are too small to serve a new process, are called external fragmentation.

6. Internal fragmentation occurs with paging and fixed partitioning. External fragmentation occurs with segmentation and dynamic partitioning.

7. Internal fragmentation occurs when a process is allocated a partition larger than its requirement; the leftover space degrades system performance. External fragmentation occurs even when each process is allocated exactly the memory space it requires, because free holes accumulate between allocations.

8. Internal fragmentation occurs in the worst-fit memory allocation method. External fragmentation occurs in the best-fit and first-fit memory allocation methods.

Difference between paging and segmentation:

1. Paging uses non-contiguous memory allocation. Segmentation also uses non-contiguous memory allocation.

2. Paging divides a program into fixed-size pages. Segmentation divides a program into variable-size segments.

3. In paging, the OS is responsible for dividing the program. In segmentation, the compiler is responsible.

4. Paging is faster than segmentation. Segmentation is slower than paging.

5. Paging is closer to the operating system. Segmentation is closer to the user.

6. Paging suffers from internal fragmentation. Segmentation suffers from external fragmentation.

7. Paging has no external fragmentation. Segmentation has no internal fragmentation.

8. In paging, the logical address is divided into a page number and a page offset. In segmentation, the logical address is divided into a segment number and a segment offset.

9. A page table is used to maintain the page information. A segment table maintains the segment information.

10. A page table entry has the frame number and some flag bits representing details about the page. A segment table entry has the base address of the segment and some protection bits for the segment.
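The address split in row 8 can be illustrated with a short sketch. The 4 KB page size and the page-table contents below are assumptions chosen only for the example:

```python
# Splitting a logical address into (page number, page offset) and translating
# it through a page table; a 4 KB page size is assumed for illustration.
PAGE_SIZE = 4096                            # bytes per page (2^12)

def split_logical_address(addr):
    """Return (page_number, page_offset) for a logical byte address."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

def physical_address(addr, page_table):
    """Translate via a page table mapping page number -> frame number."""
    page, offset = split_logical_address(addr)
    return page_table[page] * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}                   # hypothetical page table entries
print(split_logical_address(4100))          # (1, 4): page 1, offset 4
print(physical_address(4100, page_table))   # frame 2 -> 2*4096 + 4 = 8196
```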

Page replacement algorithms

Deadlock

Already on notes
