
Unit - 4

Race Condition:
A race condition is a situation that may occur inside a critical
section. It arises when the result of executing the critical section
depends on the order in which multiple threads happen to run.
Race conditions can be avoided if the critical section is treated as
an atomic instruction. Proper thread synchronization using locks or
atomic variables also prevents race conditions.

Critical Section:

The critical section is a code segment in which shared variables are
accessed. Atomic action is required in a critical section, i.e. only
one process can execute in its critical section at a time; all other
processes have to wait to execute their critical sections.

The critical section is structured as follows:

do {
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (TRUE);

In the above structure, the entry section handles entry into the critical
section: it acquires the resources needed by the process. The exit section
handles the exit from the critical section: it releases the resources and
informs the other processes that the critical section is free.

Suppose a process calls a function that is supposed to increment a value (v)
stored in a database or some shared memory. This operation is not atomic,
meaning that it can be broken down into smaller steps: the function reads
the current value from the database (A), stores it in memory as a local
variable (B), increments it (C), and finally writes the new value back to
the database (D). The function's execution is therefore a sequence of
steps A-D.

A race condition occurs if process P1 calls this function, proceeds to
step C, and is then preempted by the operating system so that another
process P2 gets its chance to run. P2 calls the same function, completes
all the steps A-D and returns. When P1 resumes, it continues from step C
using the old value (v), not P2's result (v+1), so P2's update is lost.

The race condition would not have happened if P1 had completed all steps
without preemption, or if P2 had been prevented from executing the
function until P1 had completed all steps.

Mutual Exclusion:

A mutual exclusion object (mutex) is a program object that prevents
simultaneous access to a shared resource. This concept is used in
concurrent programming together with a critical section, a piece of code
in which processes or threads access a shared resource. A mutex with a
unique name is created when a program starts, and only one thread can own
it at a time. When a thread holds the resource, it locks the mutex to
prevent concurrent access by other threads; upon releasing the resource,
the thread unlocks the mutex.
 Non-Critical Section:
The process is outside the critical section; it is not using or
requesting the shared resource.

 Trying:
The process attempts to enter the critical section.

 Critical Section:
The process is allowed to access the shared resource in this section.

 Exit:
The process leaves the critical section and makes the shared resource
available to other processes.
Hardware Solution:

There is no guarantee that software-based solutions such as Peterson's
algorithm will work on modern architectures. Hence, certain hardware-based
solutions to synchronization have been proposed. Two such solutions are:
1. TestAndSet() instruction
2. Swap() instruction

Both are atomic instructions, which means that when a process is executing
either of them, it cannot be preempted until the instruction is complete.

1. TestAndSet() Instructions:

The TestAndSet() instruction uses a boolean variable lock, whose initial
value is false. The variable lock ensures mutual exclusion. If the value
of lock is false, no process is in its critical section; conversely, the
value true means that some process is running in its critical section.

 Definition of TestAndSet() Instruction:

boolean TestAndSet(boolean *lock) {
    boolean initial = *lock;
    *lock = TRUE;
    return initial;
}
 Implementation of TestAndSet() Instruction:

do {
    while (TestAndSet(&lock));  // busy-wait until the lock is acquired
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

2. Swap() Instruction:

The Swap() instruction uses two boolean variables, lock and key.

 Definition of Swap() Instruction:


void Swap(boolean *a, boolean *b) {
    boolean t = *a;
    *a = *b;
    *b = t;
}
 Implementation of Swap() Instruction:

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Strict Alternation:

The strict alternation approach is a software mechanism implemented in
user mode. It is a busy-waiting solution that can be implemented only for
two processes. In this approach, a turn variable, which acts as a lock,
is shared between the two processes. In general, let the two processes be
Pi and Pj. The pseudocode of the program can be given as follows.
 For Process Pi:

1. Non-CS
2. while (turn != i);
3. Critical Section
4. turn = j;
5. Non-CS

 For Process Pj:

1. Non-CS
2. while (turn != j);
3. Critical Section
4. turn = i;
5. Non-CS

The actual problem of the lock variable approach was that a process
entered the critical section only when the lock variable was 1, but more
than one process could see the lock variable as 1 at the same time, so
mutual exclusion was not guaranteed.

This problem is addressed in the turn variable approach: a process can
enter the critical section only when the value of the turn variable is
equal to its own identifier.

There are only two possible values for the turn variable, i or j; if its
value is not i then it will definitely be j, and vice versa.

In the entry section, process Pi will not enter the critical section
until the value of turn is i, and process Pj will not enter until its
value is j.

Initially, the two processes Pi and Pj both want to execute their
critical sections. The turn variable equals i, so Pi gets the chance to
enter the critical section, and the value of turn remains i until Pi
finishes its critical section.

Pi then finishes its critical section and assigns j to the turn variable.
Pj now gets the chance to enter the critical section, and the value of
turn remains j until Pj finishes its critical section.
Peterson’s Solution:
The producer-consumer problem (or bounded-buffer problem) describes two
processes, the producer and the consumer, which share a common, fixed-size
buffer used as a queue. The producer produces an item and puts it into the
buffer; if the buffer is already full, the producer has to wait for an
empty slot. The consumer consumes an item from the buffer; if the buffer
is already empty, the consumer has to wait for an item. The task is to
implement Peterson's algorithm for the two processes using shared memory
such that there is mutual exclusion between them, with a solution free
from synchronization problems.
The Producer Consumer Problem:

The Producer-Consumer problem is a classic synchronization problem in
operating systems.

The problem is defined as follows: there is a fixed-size buffer, a
Producer process, and a Consumer process.

The Producer process creates an item and adds it to the shared buffer.
The Consumer process takes items out of the shared buffer and “consumes”
them.
Certain conditions must be met by the Producer and the Consumer processes
to have consistent data synchronization:

1. The Producer process must not produce an item if the shared buffer is
full.

2. The Consumer process must not consume an item if the shared buffer is
empty.

3. Access to the shared buffer must be mutually exclusive; this means that
at any given instance, only one process should be able to access the
shared buffer and make changes to it.

Semaphores:

The semaphore was proposed by Dijkstra in 1965 as a very significant
technique for managing concurrent processes using a simple integer value,
known as a semaphore. A semaphore is simply an integer variable shared
between threads. This variable is used to solve the critical section
problem and to achieve process synchronization in a multiprocessing
environment.

Semaphores are of two types:

1. Binary Semaphore –
This is also known as mutex lock. It can have only two values – 0
and 1. Its value is initialized to 1. It is used to implement the
solution of critical section problems with multiple processes.

2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.
 Some points regarding the P and V operations:
1. The P operation is also called the wait, sleep, or down operation,
and the V operation is also called the signal, wake-up, or up operation.
2. Both operations are atomic, and a binary semaphore is initialized
to one. Atomic here means that the read, modify, and update of the
variable happen as one indivisible step with no preemption, i.e. no
other operation that could change the variable is performed in between.
3. A critical section is surrounded by both operations to implement
process synchronization: the critical section of a process lies between
its P and V operations.

 Semaphores are integer variables that are used to solve the critical
section problem by means of two atomic operations, wait and signal,
which are used for process synchronization. The wait operation
decrements the value of its argument S if it is positive; if S is zero,
the process waits until S becomes positive. The signal operation
increments the value of S.

Event Counters:

Performance counters are bits of code that monitor, count, or measure events
in software, which allow us to see patterns from a high-level view. They are
registered with the operating system during installation of the software,
allowing anyone with the proper permissions to view them.

Monitors:

The monitor is one of the ways to achieve process synchronization. The
monitor construct is supported by programming languages to achieve mutual
exclusion between processes; for example, Java's synchronized methods,
together with its wait() and notify() constructs.
1. It is a collection of condition variables and procedures combined
together in a special kind of module or package.
2. Processes running outside the monitor cannot access the monitor's
internal variables, but they can call its procedures.
3. Only one process at a time can execute code inside the monitor.
Syntax:
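The general textbook form of a monitor is:

```
monitor monitor_name {
    /* shared variable declarations */
    condition x, y;

    procedure P1(...) {
        ...
    }

    procedure P2(...) {
        ...
    }

    initialization_code(...) {
        ...
    }
}
```

The condition variables support the operations x.wait(), which suspends the calling process, and x.signal(), which resumes one suspended process, if any.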

Message Passing:

Process communication is the mechanism provided by the operating system
that allows processes to communicate with each other. This communication
could involve a process letting another process know that some event has
occurred, or transferring data from one process to another. One of the
models of process communication is the message passing model, in which
both processes, say P1 and P2, can access a shared message queue to store
and retrieve data.

 Advantages of the Message Passing Model:

 The message passing model is much easier to implement than the
shared memory model.
 It is easier to build parallel hardware using the message passing
model, as it is quite tolerant of higher communication latencies.

 Disadvantage of the Message Passing Model:

 The message passing model has slower communication than the shared
memory model because the connection setup takes time.

Classical IPC Problems:

We will see a number of classical synchronization problems as examples of
a large class of concurrency-control problems. In our solutions to these
problems, we use semaphores for synchronization, since that is the
traditional way to present such solutions; however, actual implementations
could use mutex locks in place of binary semaphores.

These problems are used for testing nearly every newly proposed
synchronization scheme.
 Reader’s & Writer Problem:

Suppose that a database is to be shared among several concurrent
processes. Some of these processes may want only to read the database,
whereas others may want to update (that is, to read and write) it. We
distinguish between these two types of processes by referring to the
former as readers and to the latter as writers.
In OS terms, this situation is called the readers-writers problem. The
problem parameters are:
 One set of data is shared among a number of processes.

 Once a writer is ready, it performs its write. Only one writer may
write at a time.

 If a process is writing, no other process can read the data.

 If at least one reader is reading, no other process can write.

 Readers only read; they may not write.

 Dinning Philosopher Problem:

The dining philosophers problem states that K philosophers are seated
around a circular table with one chopstick between each pair of
philosophers. A philosopher may eat only if he can pick up the two
chopsticks adjacent to him. Each chopstick may be picked up by either of
its adjacent philosophers, but not both. The problem involves allocating
limited resources to a group of processes in a deadlock-free and
starvation-free manner.
Scheduling:

 In computing, scheduling is the action of assigning resources to


perform tasks. The resources may be processors, network
links or expansion cards. The tasks may be threads, processes or
data flows.
 The scheduling activity is carried out by a process called the
scheduler. Schedulers are often designed so as to keep all computer
resources busy (as in load balancing), allow multiple users to share
system resources effectively, or achieve a target quality of service.
 Scheduling is fundamental to computation itself, and an intrinsic part of
the execution model of a computer system; the concept of scheduling
makes it possible to have computer multitasking with a single central
processing unit (CPU).
 The process scheduling is the activity of the process manager that
handles the removal of the running process from the CPU and the
selection of another process on the basis of a particular strategy.
 Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded
into executable memory at a time, and the loaded processes share the CPU
using time multiplexing.

Scheduling Algorithms:

A process scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms. There are six popular
process scheduling algorithms, which we are going to discuss in this
chapter:

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin (RR) Scheduling
 Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive
algorithms are designed so that once a process enters the running state,
it cannot be preempted until it completes its allotted time, whereas
preemptive scheduling is priority-based: the scheduler may preempt a
low-priority running process whenever a high-priority process enters the
ready state.
