Critical Section in OS

The critical section in operating systems is a part of a program where shared resources are accessed, necessitating synchronization mechanisms like locks and semaphores to prevent conflicts. The critical section problem arises when multiple processes access shared resources simultaneously, leading to race conditions, and solutions must ensure mutual exclusion, progress, and bounded waiting. Strategies to avoid critical section problems include fine-grained locking, lock hierarchies, read-write locks, optimistic concurrency control, and lock-free data structures, each with its own advantages and disadvantages.

Uploaded by

Swathi Tamma
Copyright
© All Rights Reserved

Critical Section in OS

The critical section refers to a specific part of a program where shared resources
are accessed, and concurrent execution may lead to conflicts or inconsistencies. It
is essential for the operating system to provide mechanisms like locks and
semaphores to ensure proper synchronization and mutual exclusion in the critical
section. These safeguards prevent concurrent processes from interfering with
each other, maintaining the integrity of shared resources.

What is the Critical Section Problem in OS?

When more than one process accesses or modifies a shared resource at the same
time, the final value of that resource depends on the order in which the processes
happen to run. This is called a race condition.

Consider an example of two processes, p1 and p2, and let value=3 be a variable in the
shared resource. Process p1 adds 3 to the variable and process p2 subtracts 3 from it:

value = value + 3 // process p1
value = value - 3 // process p2

Run one after the other, these operations leave value = 3 (3 + 3 - 3). But if both
processes read value = 3 before either writes back, p1 writes 6 and p2 then overwrites
it with 0 (3 - 3), so p1's update is lost; had p1 written last instead, the final value
would be 6 and p2's update would be lost. Either way the result is wrong. This is the
problem of synchronization.
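One harmful interleaving can be replayed deterministically in plain Python: both processes read the shared variable before either writes its result back, so one of the two updates is silently lost (the `p1_read`/`p2_read` names are illustrative, standing for each process's private copy of the value).

```python
# Deterministic replay of a lost-update interleaving.
value = 3

p1_read = value        # p1 reads 3
p2_read = value        # p2 also reads 3 (p1 has not written yet)

value = p1_read + 3    # p1 writes 6
value = p2_read - 3    # p2 writes 0, discarding p1's update

print(value)           # 0 -- p1's increment is lost
```

Running the two operations under mutual exclusion would instead give 6 after p1 and 3 after p2, with neither update lost.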

The critical section problem is to ensure that only one process executes in its critical
section at a time. While a process is inside its critical section, no other process is
allowed to enter. This eliminates the race condition.

Solutions to the Critical Section Problem

To effectively address the Critical Section Problem in operating systems, any solution must
meet three key requirements:

1. Mutual Exclusion: This means that when one process is executing within its critical
section, no other process should be allowed to enter its own critical section. This
ensures that shared resources are accessed by only one process at a time, preventing
conflicts and data corruption.

2. Progress: When no process is currently executing in its critical section, and there is a
process that wishes to enter its critical section, it should not be kept waiting
indefinitely. The system should enable processes to make progress, ensuring that they
eventually get a chance to access their critical sections.
3. Bounded Waiting: There must be a bound on the number of times other processes
are allowed to enter their critical sections after a process has requested entry to its
critical section and before that request is granted. This ensures fairness and prevents any
process from being starved of critical section access.
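As an illustration of all three requirements, Peterson's classic two-process algorithm achieves mutual exclusion, progress, and bounded waiting using only shared variables. The sketch below runs it on Python threads; it relies on CPython's interpreter lock for sequentially consistent memory, whereas a real implementation on modern hardware would need memory barriers. The names `flag` and `turn` follow the textbook presentation.

```python
import threading

# Peterson's algorithm for two processes (ids 0 and 1).
flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # which process yields when both want in
counter = 0             # shared resource

def worker(i):
    global counter, turn
    other = 1 - i
    for _ in range(10000):
        # entry section
        flag[i] = True
        turn = other                         # politely yield priority
        while flag[other] and turn == other:
            pass                             # busy-wait
        # critical section
        counter += 1
        # exit section
        flag[i] = False

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 20000 -- no update is lost
```

Because each waiting process can be bypassed at most once (the other process resets `turn` on its next entry attempt), the algorithm also satisfies bounded waiting.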

Various solutions have been developed to meet these requirements and manage the Critical
Section Problem. They range from hardware-supported atomic instructions to higher-level
software locks. Here are some common approaches:

1. Test-and-Set: This method uses a shared boolean variable, typically called
"lock," together with the atomic "test_and_set" instruction, which returns the lock's old
value and sets it to true in a single step. A process keeps retrying until the old value it
reads back is false.

2. Compare-and-Swap: Similar to test-and-set, this approach also uses a shared
boolean variable but employs the "compare_and_swap" instruction. It sets the lock to
true only if the lock's current value matches an expected value passed to it.

3. Mutex Locks: Mutex (short for mutual exclusion) locks provide functions like
"acquire()" and "release()" that execute atomically. These locks ensure that only one
process can acquire the lock at a time.

4. Semaphores: Semaphores are more advanced synchronization tools. They use
"wait()" and "signal()" operations, executed atomically on a semaphore variable
(typically an integer). Semaphores can manage access to resources more flexibly.

5. Condition Variables: This approach maintains a queue of processes waiting to enter
their critical sections. It ensures orderly access by managing the waiting processes
based on certain conditions.

The essential principle across these solutions is to guarantee exclusive access to critical
sections while allowing processes to make progress and ensuring that no process is left
waiting indefinitely. The specific mechanisms and tools used may vary, but they all aim to
maintain the integrity of shared resources in the system.
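The semaphore operations described above map directly onto Python's `threading.Semaphore`, whose `acquire()` and `release()` correspond to the classical `wait()` and `signal()`. The sketch below uses a binary semaphore to guard a critical section (the `deposit` scenario is illustrative):

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: 1 means the resource is free
balance = 0

def deposit(amount, times):
    global balance
    for _ in range(times):
        sem.acquire()          # wait(): decrement, block if already 0
        balance += amount      # critical section
        sem.release()          # signal(): increment, wake a waiter

threads = [threading.Thread(target=deposit, args=(1, 10000)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 30000
```

Initializing the semaphore with a count greater than 1 would instead admit that many processes at once, which is how semaphores manage pools of identical resources.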

Strategies For Avoiding Problems

In computer science and operating systems, managing critical sections is a crucial aspect of
ensuring concurrent programs run smoothly and without conflicts. Here are some effective
strategies for avoiding critical section problems:

1. Fine-Grained Locking:

Fine-grained locking involves breaking down resources into smaller, more specific units and
applying locks only to those units rather than a broad, all-encompassing lock. This allows for
increased concurrency as different processes can access different parts of the resource
simultaneously.
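A minimal sketch of the idea: instead of one lock over a whole table, each bucket (a "stripe") gets its own lock, so updates to different keys can proceed in parallel. The `StripedCounter` name and the stripe count of 8 are arbitrary choices for illustration.

```python
import threading

class StripedCounter:
    """Per-stripe locks instead of one global lock (fine-grained locking)."""
    def __init__(self, stripes=8):
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._counts = [dict() for _ in range(stripes)]

    def _stripe(self, key):
        return hash(key) % len(self._locks)

    def increment(self, key):
        i = self._stripe(key)
        with self._locks[i]:          # only this stripe is serialized
            self._counts[i][key] = self._counts[i].get(key, 0) + 1

    def get(self, key):
        i = self._stripe(key)
        with self._locks[i]:
            return self._counts[i].get(key, 0)

c = StripedCounter()

def worker(key):
    for _ in range(1000):
        c.increment(key)

threads = [threading.Thread(target=worker, args=(k,)) for k in ("a", "b", "c", "d")]
for t in threads: t.start()
for t in threads: t.join()
print(c.get("a"), c.get("b"))  # 1000 1000
```

Threads working on keys that hash to different stripes never contend with each other, which is the concurrency gain over a single global lock.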
2. Lock Hierarchies:

Lock hierarchies establish a specific order in which locks must be acquired. This helps
prevent deadlocks, where two or more processes are unable to proceed because each is
waiting for the other to release a lock. By enforcing a consistent lock acquisition order,
deadlocks can be avoided.
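One common way to enforce a consistent order is to give every lock a rank and always acquire in ascending rank. The sketch below sorts locks by an id before acquiring, so two transfers between the same pair of accounts can never deadlock (the `Account`/`transfer` names are hypothetical):

```python
import threading

class Account:
    _next_id = 0
    def __init__(self, balance):
        self.id = Account._next_id          # fixed rank in the lock hierarchy
        Account._next_id += 1
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # Acquire locks in ascending id order, regardless of transfer direction,
    # so concurrent opposite-direction transfers cannot deadlock.
    first, second = sorted((src, dst), key=lambda a: a.id)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount

a, b = Account(100), Account(100)
t1 = threading.Thread(target=transfer, args=(a, b, 30))
t2 = threading.Thread(target=transfer, args=(b, a, 10))
t1.start(); t2.start()
t1.join(); t2.join()
print(a.balance, b.balance)  # 80 120
```

Without the sorting step, t1 could hold a's lock while t2 holds b's, each waiting forever for the other.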

3. Read-Write Locks:

Read-write locks differentiate between operations that only read data and those that write or
modify it. Multiple processes can hold the read lock simultaneously, enabling concurrent
reading. However, only one process can hold the write lock, ensuring exclusive access during
writes.
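Python's standard library has no read-write lock, so the sketch below builds a simple one from two plain locks: the first reader blocks writers, and the last reader readmits them. This is the textbook "first readers-writers" scheme; note that writers can starve under a steady stream of readers.

```python
import threading

class ReadWriteLock:
    """Many concurrent readers, one exclusive writer (simple sketch)."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()        # protects _readers
        self._write_lock = threading.Lock()   # held while anyone writes

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:            # first reader blocks writers
                self._write_lock.acquire()

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:            # last reader admits writers
                self._write_lock.release()

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()

rw = ReadWriteLock()
data = {"x": 1}

rw.acquire_read()
snapshot = data["x"]       # many readers could be here at once
rw.release_read()

rw.acquire_write()
data["x"] = snapshot + 1   # exclusive access while writing
rw.release_write()
print(data["x"])  # 2
```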

4. Optimistic Concurrency Control (OCC):

OCC is a technique used in database systems to manage concurrent access. It allows multiple
processes to read data without obtaining locks. When a process attempts to write, the system
checks if the data has been modified by another process. If not, the write proceeds; otherwise,
it is retried.
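The check-then-write step can be sketched with a per-record version number: a writer remembers the version it read, and the commit succeeds only if that version is unchanged. The `Record` and `try_commit` names are hypothetical; only the short commit itself is serialized, reads take no lock at all.

```python
import threading

class Record:
    """A value with a version number, for optimistic concurrency control."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._commit_lock = threading.Lock()  # only the commit is serialized

    def read(self):
        return self.value, self.version       # no lock taken to read

    def try_commit(self, new_value, expected_version):
        with self._commit_lock:
            if self.version != expected_version:
                return False                  # someone else wrote first: retry
            self.value = new_value
            self.version += 1
            return True

rec = Record(10)

# Optimistic update loop: read, compute, validate-and-write, retry on conflict.
while True:
    value, version = rec.read()
    if rec.try_commit(value + 5, version):
        break

print(rec.value, rec.version)  # 15 1
```

OCC pays off when conflicts are rare: the common case does no locking, and only the occasional loser of a race has to redo its work.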

5. Lock-Free and Wait-Free Data Structures:

Lock-free and wait-free data structures are designed to operate without traditional locks.
Instead, they use atomic operations or specialized algorithms to ensure progress even in the
presence of concurrent access.
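As a sketch of the shape such structures take, the Treiber stack retries a compare-and-swap on the head pointer instead of holding a lock across the operation. Python has no hardware CAS, so the `_cas` helper below simulates one with a tiny internal lock; a real lock-free stack would use the processor's compare_and_swap directly.

```python
import threading

class LockFreeStack:
    """Treiber-stack sketch: push/pop retry a CAS on the head pointer."""
    class Node:
        def __init__(self, value, next_node):
            self.value, self.next = value, next_node

    def __init__(self):
        self._head = None
        self._cas_lock = threading.Lock()  # stands in for an atomic CAS

    def _cas(self, expected, new):
        # Simulated compare-and-swap on the head pointer.
        with self._cas_lock:
            if self._head is expected:
                self._head = new
                return True
            return False

    def push(self, value):
        while True:                         # retry until our CAS wins
            head = self._head
            if self._cas(head, self.Node(value, head)):
                return

    def pop(self):
        while True:
            head = self._head
            if head is None:
                return None                 # stack is empty
            if self._cas(head, head.next):
                return head.value

s = LockFreeStack()
s.push(1); s.push(2); s.push(3)
popped = [s.pop(), s.pop(), s.pop()]
print(popped)  # [3, 2, 1]
```

Because no thread ever blocks holding the structure's state, a stalled thread cannot prevent others from completing their operations, which is the defining property of lock-free designs.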

Implementing these strategies requires a deep understanding of the specific requirements and
constraints of the system in question. By carefully selecting and combining these techniques,
developers can create robust, concurrent programs that effectively avoid critical section
problems.

Advantages and Disadvantages of Critical Sections in Process Synchronization

Like any synchronization mechanism, critical sections come with their own set of advantages
and disadvantages.

Advantages:

1. Data Integrity: Critical sections provide a controlled environment where shared
resources can be accessed by only one process at a time. This ensures that data is
modified in a consistent and predictable manner, preventing conflicts and corruption.

2. Simplicity and Ease of Use: Implementing critical sections is often straightforward,
especially with the availability of synchronization primitives provided by operating
systems and programming languages. This makes them a convenient choice for
managing shared resources.
3. Predictable Execution: Critical sections allow developers to specify exactly which
parts of the code need to be executed exclusively. This level of control ensures that
processes do not interfere with each other, leading to more predictable program
behavior.

4. Compatibility with Legacy Code: Critical sections are a well-established
synchronization mechanism and are supported by a wide range of programming
languages and operating systems. This makes them compatible with existing
codebases and systems.

Disadvantages:

1. Potential for Deadlocks: If not used with care, critical sections can lead to
deadlocks, where processes are unable to proceed because they are waiting for
resources held by other processes. Designing proper lock acquisition order is crucial
to avoid this issue.

2. Reduced Concurrency: While a process is in a critical section, other processes that
require access to the same resource must wait. This can lead to reduced parallelism
and overall system throughput.

3. Overhead of Locking and Unlocking: Acquiring and releasing locks within critical
sections incurs overhead. In highly concurrent systems, this can become a
performance bottleneck.

4. Complexity in Debugging: Debugging programs with critical sections can be more
challenging, as issues related to race conditions and deadlocks may not always be
immediately apparent.
