Critical Section in OS
The critical section refers to a specific part of a program where shared resources
are accessed, and concurrent execution may lead to conflicts or inconsistencies. It
is essential for the operating system to provide mechanisms like locks and
semaphores to ensure proper synchronization and mutual exclusion in the critical
section. These safeguards prevent concurrent processes from interfering with
each other, maintaining the integrity of shared resources.
When more than one process accesses or modifies a shared resource at the same time, the
final value of that resource depends on the order in which the processes happen to run. This
is called a race condition.
Consider an example of two processes, p1 and p2, and a shared variable value = 3.
Suppose the two processes perform the following actions:
value = value + 3 // process p1: value becomes 6
value = value - 3 // process p2: value becomes 3
After p1 runs, value should be 6, but because p2 interleaves with p1, the value is changed
back to 3. This is the synchronization problem.
The critical section problem is to make sure that only one process should be in a critical
section at a time. When a process is in the critical section, no other processes are allowed to
enter the critical section. This solves the race condition.
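As a minimal sketch of this idea (using Python's threading module; the variable and function names below are ours, chosen for illustration), a lock around the shared update guarantees that only one thread at a time is inside the critical section, so no update is lost:

```python
import threading

value = 0
lock = threading.Lock()

def add(amount, times):
    global value
    for _ in range(times):
        with lock:           # enter critical section: mutual exclusion
            value += amount  # only one thread updates `value` at a time

t1 = threading.Thread(target=add, args=(3, 10000))
t2 = threading.Thread(target=add, args=(-3, 10000))
t1.start(); t2.start()
t1.join(); t2.join()
print(value)  # 0: the +3 and -3 updates are never lost or interleaved
```

Without the `with lock:` line, the read-modify-write on `value` could interleave between threads and produce an unpredictable result.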
To effectively address the Critical Section Problem in operating systems, any solution must
meet three key requirements:
1. Mutual Exclusion: This means that when one process is executing within its critical
section, no other process should be allowed to enter its own critical section. This
ensures that shared resources are accessed by only one process at a time, preventing
conflicts and data corruption.
2. Progress: When no process is currently executing in its critical section, and there is a
process that wishes to enter its critical section, it should not be kept waiting
indefinitely. The system should enable processes to make progress, ensuring that they
eventually get a chance to access their critical sections.
3. Bounded Waiting: There must be a limit on the number of times a process can
execute in its critical section after another process has requested access to its critical
section but before that request is granted. This ensures fairness and prevents any
process from being starved of critical section access.
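A classic software solution that satisfies all three requirements for two processes is Peterson's algorithm. The sketch below is illustrative only: it works under CPython's interpreter behavior, but a busy-wait loop over plain variables is not a correct spinlock on real hardware with weak memory ordering.

```python
import threading

flag = [False, False]  # flag[i]: process i wants to enter
turn = 0               # index of the process that must yield
counter = 0
N = 10000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True
        turn = j                      # politely give the other process priority
        while flag[j] and turn == j:  # busy-wait; bounded, since the other
            pass                      # process sets turn back when it retries
        counter += 1                  # critical section
        flag[i] = False               # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2 * N when mutual exclusion holds
```

Mutual exclusion follows because both processes can be past the wait loop only if `turn` is simultaneously 0 and 1, which is impossible; progress and bounded waiting follow because a process waits only while the other is actively contending.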
Various solutions have been developed to meet these requirements and manage the Critical
Section Problem. These solutions rely on hardware- and software-based locking primitives for
synchronization. Here are some common approaches:
1. Test-and-Set: This method uses a shared boolean variable, typically called "lock,"
together with a "test_and_set" instruction that atomically sets the lock to true and
returns its previous value. A process may enter its critical section only when the
returned value was false.
2. Mutex Locks: Mutex (short for mutual exclusion) locks provide functions like
"acquire()" and "release()" that execute atomically. These locks ensure that only one
process can hold the lock at a time.
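Python does not expose a hardware test-and-set instruction, so the sketch below emulates its atomicity with an internal `threading.Lock`; the `SpinLock` class name and structure are ours. On real hardware, `_test_and_set` would be a single atomic instruction:

```python
import threading

class SpinLock:
    """Spinlock built on an emulated test_and_set (illustration only)."""
    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()

    def _test_and_set(self):
        with self._guard:            # emulates the instruction's atomicity
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._test_and_set():  # spin until the previous value was False
            pass

    def release(self):
        self._flag = False

counter = 0
spin = SpinLock()

def work():
    global counter
    for _ in range(10000):
        spin.acquire()
        counter += 1   # critical section
        spin.release()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000: no increments are lost
```

Note that spinning burns CPU while waiting, which is why spinlocks are used for short critical sections; mutexes instead put waiting threads to sleep.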
The essential principle across these solutions is to guarantee exclusive access to critical
sections while allowing processes to make progress and ensuring that no process is left
waiting indefinitely. The specific mechanisms and tools used may vary, but they all aim to
maintain the integrity of shared resources in the system.
In computer science and operating systems, managing critical sections is a crucial aspect of
ensuring concurrent programs run smoothly and without conflicts. Here are some effective
strategies for avoiding critical section problems:
1. Fine-Grained Locking:
Fine-grained locking involves breaking down resources into smaller, more specific units and
applying locks only to those units rather than a broad, all-encompassing lock. This allows for
increased concurrency as different processes can access different parts of the resource
simultaneously.
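A small sketch of fine-grained locking in Python (the account names and functions are hypothetical): instead of one global lock, each account gets its own lock, so threads touching different accounts never block each other.

```python
import threading

# One lock per account instead of one coarse lock over the whole table.
accounts = {"a": 100, "b": 100}
locks = {name: threading.Lock() for name in accounts}

def deposit(name, amount, times):
    for _ in range(times):
        with locks[name]:          # lock only the account being updated
            accounts[name] += amount

t1 = threading.Thread(target=deposit, args=("a", 1, 5000))
t2 = threading.Thread(target=deposit, args=("b", 1, 5000))
t1.start(); t2.start()
t1.join(); t2.join()
print(accounts["a"], accounts["b"])  # 5100 5100
```

The two threads here run fully concurrently because they contend for different locks; a single coarse lock would serialize them.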
2. Lock Hierarchies:
Lock hierarchies establish a specific order in which locks must be acquired. This helps
prevent deadlocks, where two or more processes are unable to proceed because each is
waiting for the other to release a lock. By enforcing a consistent lock acquisition order,
deadlocks can be avoided.
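A minimal sketch of lock ordering (the `rank` table and helper are ours): every thread sorts the locks it needs by a fixed global rank before acquiring them, so no circular wait can form even when callers name the locks in different orders.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
rank = {id(lock_a): 0, id(lock_b): 1}  # fixed global ordering of locks
shared = []

def with_both(l1, l2, tag):
    # Sort by rank so every thread acquires the locks in the same
    # order, eliminating the circular wait that causes deadlock.
    first, second = sorted((l1, l2), key=lambda lk: rank[id(lk)])
    with first, second:
        shared.append(tag)

t1 = threading.Thread(target=with_both, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=with_both, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(shared))  # ['t1', 't2']
```

Without the sorting step, t1 holding lock_a while t2 holds lock_b could deadlock, each waiting forever for the other's lock.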
3. Read-Write Locks:
Read-write locks differentiate between operations that only read data and those that write or
modify it. Multiple processes can hold the read lock simultaneously, enabling concurrent
reading. However, only one process can hold the write lock, ensuring exclusive access during
writes.
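Python's standard library has no read-write lock, so here is a minimal reader-preference sketch built from two plain locks (the `RWLock` class is ours, for illustration; production code would handle writer starvation):

```python
import threading

class RWLock:
    """Minimal read-write lock sketch (reader-preference)."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()  # protects the reader count
        self._write = threading.Lock()  # held by a writer or the reader group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:      # first reader blocks writers
                self._write.acquire()

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:      # last reader lets writers in
                self._write.release()

    def acquire_write(self):
        self._write.acquire()           # exclusive access for writers

    def release_write(self):
        self._write.release()

data = {"x": 0}
rw = RWLock()

rw.acquire_write()
data["x"] = 42
rw.release_write()

rw.acquire_read()   # many threads could hold the read lock at once
v = data["x"]
rw.release_read()
print(v)  # 42
```

Any number of readers share the `_write` lock through the counter, while a writer acquires it exclusively.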
4. Optimistic Concurrency Control (OCC):
OCC is a technique used in database systems to manage concurrent access. It allows multiple
processes to read data without obtaining locks. When a process attempts to write, the system
checks whether the data has been modified by another process. If not, the write proceeds;
otherwise, it is retried.
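The validate-then-retry pattern can be sketched as follows (the `VersionedCell` class is our invention; the small lock inside stands in for a database's atomic commit step, while reads stay lock-free):

```python
import threading

class VersionedCell:
    """Optimistic concurrency sketch: lock-free reads, validated writes."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._commit = threading.Lock()  # stands in for the atomic commit

    def read(self):
        version = self.version  # read version before value so that a
        value = self.value      # concurrent commit is always detected
        return value, version

    def try_write(self, expected_version, new_value):
        with self._commit:
            if self.version != expected_version:
                return False    # another process wrote first; caller retries
            self.value = new_value
            self.version += 1
            return True

cell = VersionedCell(0)

def add(amount, times):
    for _ in range(times):
        while True:             # optimistic retry loop
            value, version = cell.read()
            if cell.try_write(version, value + amount):
                break

t1 = threading.Thread(target=add, args=(1, 5000))
t2 = threading.Thread(target=add, args=(1, 5000))
t1.start(); t2.start()
t1.join(); t2.join()
print(cell.value)  # 10000: conflicting writes were detected and retried
```

OCC performs well when conflicts are rare, since readers never block; under heavy write contention the retries become costly.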
5. Lock-Free and Wait-Free Data Structures:
Lock-free and wait-free data structures are designed to operate without traditional locks.
Instead, they use atomic operations or specialized algorithms to ensure progress even in the
presence of concurrent access.
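The core atomic operation behind most lock-free structures is compare-and-swap (CAS). Python exposes no hardware CAS, so the sketch below emulates one atomic CAS with an internal lock (the `AtomicInt` class is ours); in C or C++ this would be a single `compare_exchange` instruction:

```python
import threading

class AtomicInt:
    """Sketch of a lock-free counter built on compare-and-swap (CAS)."""
    def __init__(self, value=0):
        self.value = value
        self._cas_guard = threading.Lock()  # emulates hardware atomicity

    def compare_and_swap(self, expected, new):
        with self._cas_guard:
            if self.value == expected:
                self.value = new
                return True
            return False

    def increment(self):
        while True:              # lock-free retry loop: read, compute,
            current = self.value # CAS; retry only if another thread
            if self.compare_and_swap(current, current + 1):
                return           # interfered between read and CAS

counter = AtomicInt()

def work():
    for _ in range(5000):
        counter.increment()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # 10000
```

Unlike a lock-based counter, no thread ever blocks here: a failed CAS simply means another thread made progress, which is exactly the lock-free guarantee.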
Implementing these strategies requires a deep understanding of the specific requirements and
constraints of the system in question. By carefully selecting and combining these techniques,
developers can create robust, concurrent programs that effectively avoid critical section
problems.
Like any synchronization mechanism, critical sections come with their own set of advantages
and disadvantages.
Advantages:
1. Mutual Exclusion: Critical sections guarantee that a shared resource is accessed by
only one process at a time, preventing race conditions.
2. Data Integrity: By serializing access to shared data, critical sections maintain the
integrity and consistency of shared resources.
Disadvantages:
1. Potential for Deadlocks: If not used with care, critical sections can lead to
deadlocks, where processes are unable to proceed because they are waiting for
resources held by other processes. Designing proper lock acquisition order is crucial
to avoid this issue.
2. Overhead of Locking and Unlocking: Acquiring and releasing locks within critical
sections incurs overhead. In highly concurrent systems, this can become a
performance bottleneck.