4 unit.pdf

Timestamp-based concurrency control can lead to non-recoverable schedules due to dirty reads, where a transaction reads uncommitted data from another transaction. This can be mitigated by buffering, which delays transactions that depend on uncommitted data until the relevant transactions commit or roll back. Lock-based protocols, such as Two-Phase Locking, ensure concurrency control by restricting access to data items, thus maintaining consistency and preventing conflicts among transactions.
1. Explain why timestamp-based concurrency control allows schedules that are not recoverable.
Describe how it can be modified through buffering to disallow such schedules.
Timestamp-based concurrency control assigns a unique timestamp to each transaction at the time
of its initiation. This ensures that the relative ordering of transactions is determined by their
timestamps, with older transactions having priority over younger ones. However, this mechanism
can create schedules that are not recoverable, especially in cases of dirty reads. A dirty read
happens when a transaction reads data written by another transaction that has not yet committed.
If the transaction that made the modification is rolled back (after the other transaction has read it),
the data used by the transaction is inconsistent, resulting in an irrecoverable schedule.
For example, suppose transaction T1 writes a value to a data item and transaction T2 reads that value before T1 commits. If T1 then fails and is rolled back, the database becomes inconsistent; and if T2 has already committed on the basis of T1's uncommitted write, recovery is impossible, because T2 committed using a value that is no longer valid.
This issue can be mitigated through buffering, where transactions that depend on uncommitted
data are delayed until the transaction they depend on either commits or rolls back. Buffering
essentially prevents a transaction from committing until all the transactions it depends on have
either committed or been rolled back, ensuring that only committed data is used by subsequent
transactions. This change helps in preventing dirty reads and, thus, ensures that schedules are
recoverable, meaning that the system can revert to a consistent state even after failures.
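One way to picture this buffering is as a table of commit dependencies: a transaction that reads uncommitted data records a dependency on the writer, and its own commit is held back until every writer it depends on has committed. The sketch below (a hypothetical `Scheduler` class, not any particular DBMS's API) illustrates the idea:

```python
# Minimal sketch of commit buffering for timestamp ordering.
# The class and method names are illustrative assumptions.

class Scheduler:
    def __init__(self):
        self.uncommitted_writers = {}   # data item -> txn that last wrote it (uncommitted)
        self.depends_on = {}            # txn -> set of txns whose uncommitted data it read
        self.waiting_to_commit = set()  # txns whose commit is currently buffered

    def write(self, txn, item):
        self.uncommitted_writers[item] = txn

    def read(self, txn, item):
        writer = self.uncommitted_writers.get(item)
        if writer is not None and writer != txn:
            # txn performed a dirty read: record a commit dependency on the writer
            self.depends_on.setdefault(txn, set()).add(writer)

    def try_commit(self, txn):
        # Buffer the commit until every transaction txn depends on has committed
        if self.depends_on.get(txn):
            self.waiting_to_commit.add(txn)
            return False
        self._finalize(txn)
        return True

    def _finalize(self, txn):
        # txn commits: clear its pending writes and release dependents waiting on it
        for item, w in list(self.uncommitted_writers.items()):
            if w == txn:
                del self.uncommitted_writers[item]
        for deps in self.depends_on.values():
            deps.discard(txn)
        for other in list(self.waiting_to_commit):
            if not self.depends_on.get(other):
                self.waiting_to_commit.discard(other)
                self._finalize(other)
```

In this sketch, T2's commit is buffered after its dirty read of x and is finalized automatically once T1 commits; had T1 aborted instead, the scheduler would roll T2 back rather than finalize it.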

2. How do lock-based protocols ensure concurrency control in databases?


Lock-based protocols are fundamental in ensuring that concurrent transactions operate on a
database in a way that maintains consistency and prevents conflicts. A lock is a mechanism that
restricts access to a data item to ensure that only one transaction can modify or access the item at
a time, preventing undesirable outcomes like data corruption or inconsistencies.
There are primarily two types of locks used in lock-based concurrency control:
Shared Lock (S): This lock allows a transaction to read a data item but not write to it. Multiple
transactions can hold a shared lock on the same data item simultaneously.
Exclusive Lock (X): This lock allows a transaction to read and write a data item, but no other
transaction can acquire any lock on the item while it holds the exclusive lock.
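These rules are commonly summarized as a lock compatibility matrix: a requested lock is granted only if it is compatible with every lock other transactions currently hold on the item. A minimal sketch (the names are illustrative):

```python
# Lock compatibility for shared (S) and exclusive (X) locks.
# True means the requested mode can coexist with the held mode.
COMPATIBLE = {
    ("S", "S"): True,   # many readers may share an item
    ("S", "X"): False,  # a reader must wait for a writer
    ("X", "S"): False,  # a writer must wait for all readers
    ("X", "X"): False,  # writers are mutually exclusive
}

def can_grant(requested, held_modes):
    """Grant `requested` only if it is compatible with every held mode."""
    return all(COMPATIBLE[(requested, h)] for h in held_modes)
```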
Lock-based protocols, such as Two-Phase Locking (2PL), ensure that transactions acquire and
release locks in a way that guarantees serializability. The Two-Phase Locking protocol divides a
transaction into two phases:
Growing Phase: During this phase, a transaction can acquire new locks but cannot release any
locks.
Shrinking Phase: Once the transaction starts releasing locks, it cannot acquire any new locks.
Strict 2PL is a variation where exclusive locks are not released until the transaction commits,
ensuring that no other transaction can access the data until the transaction is completed. This
prevents cascading rollbacks (where the failure of one transaction leads to the failure of others) and
ensures that transactions are recoverable.
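The two-phase rule can be sketched as a small invariant on a transaction object: the first release ends the growing phase, and any acquire after that is an error. Under strict 2PL, releases happen only at commit. This is an illustrative sketch, not a real lock manager:

```python
# Sketch of the two-phase rule. Under strict 2PL, release() is only
# ever reached via commit(), so locks are held until the very end.

class TwoPhaseTxn:
    def __init__(self, name):
        self.name = name
        self.locks = set()
        self.shrinking = False  # flips to True at the first release

    def acquire(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot acquire after releasing")
        self.locks.add(item)

    def release(self, item):
        self.shrinking = True   # the growing phase ends here
        self.locks.discard(item)

    def commit(self):
        # Strict 2PL: all locks are released only at commit time
        for item in list(self.locks):
            self.release(item)
```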
Through these protocols, lock-based mechanisms ensure that only one transaction can modify a
data item at a time, preserving database consistency and avoiding issues like dirty writes or lost
updates.
3. Explain in detail about Serializability.
Serializability is a fundamental concept in database concurrency control, ensuring that concurrent
transactions produce results that are equivalent to some serial execution of the transactions. In a
serial schedule, transactions are executed one after the other without any interleaving, meaning that
the operations of one transaction are fully completed before another transaction begins.
Serializability guarantees that even if transactions are executed concurrently, their effects are the
same as if they had been executed serially.
There are two types of serializability:
1. Conflict Serializability:
A schedule is conflict serializable if it can be transformed into a serial schedule by swapping only
those operations that do not conflict. Two operations conflict if they meet all of the following
conditions:
They belong to different transactions.
They operate on the same data item.
At least one of the operations is a write.
The most common way to test conflict serializability is by constructing a precedence graph or
serialization graph. In this graph, nodes represent transactions, and directed edges represent
conflicts between transactions. If the graph is acyclic, the schedule is conflict serializable. If the
graph contains cycles, the schedule is not conflict serializable and cannot be converted into a serial
schedule without violating serializability.
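The test can be sketched directly: collect an edge for every conflicting pair of operations, then run a depth-first search for cycles. The schedule representation below (triples of transaction, operation, item) is an assumption for illustration:

```python
# Conflict-serializability test via a precedence graph.
# A schedule is a list of (txn, op, item) triples, op in {"R", "W"}.

def precedence_graph(schedule):
    edges = set()
    for i, (ti, op_i, x_i) in enumerate(schedule):
        for tj, op_j, x_j in schedule[i + 1:]:
            # Conflict: different txns, same item, at least one write
            if ti != tj and x_i == x_j and "W" in (op_i, op_j):
                edges.add((ti, tj))  # ti's operation precedes tj's
    return edges

def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
    visited, on_stack = set(), set()

    def dfs(u):
        visited.add(u); on_stack.add(u)
        for v in graph.get(u, ()):
            if v in on_stack or (v not in visited and dfs(v)):
                return True
        on_stack.discard(u)
        return False

    return any(dfs(u) for u in graph if u not in visited)

def conflict_serializable(schedule):
    return not has_cycle(precedence_graph(schedule))
```

For example, the schedule R1(A), W2(A), W1(A) yields both the edge T1 → T2 and the edge T2 → T1, so the graph is cyclic and the schedule is not conflict serializable.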
2. View Serializability:
A schedule is view serializable if it is view-equivalent to a serial schedule. Two schedules are view-equivalent if they meet the following conditions:
Initial Read: If a transaction reads a data item for the first time in the schedule, it must read the
same value as in the serial schedule.
Updated Read: If a transaction reads a data item that was written by another transaction, it must
read the updated value as in the serial schedule.
Final Write: The last write on a data item in a schedule must be performed by the same transaction
as in the serial schedule.
View serializability is more general than conflict serializability, but it is also far harder to test: deciding whether a schedule is view serializable is NP-complete, so practical systems check for conflict serializability instead. A schedule that is conflict serializable is always view serializable, but the reverse is not necessarily true.
Both types of serializability ensure the correctness of transactions in the database by preventing
anomalies like lost updates, dirty reads, and unrepeatable reads. In practice, databases use
techniques like 2PL or timestamp-based concurrency control to ensure that schedules are
serializable.

4. Define failure classification. What are the different types of failures in a database system?
Failures in a database system can be classified into several types based on their causes and
impacts:
1. Transaction Failures: These failures occur during the execution of a single transaction and may
result from various factors:
Logical Errors: These errors occur when a transaction cannot complete due to an internal issue,
such as an invalid input or violation of a constraint (e.g., trying to insert a record with a duplicate
primary key).
System Errors: These errors occur when the database system detects an issue, such as a deadlock
or an unexpected termination of a transaction due to resource contention or a software bug.
2. System Crashes: A system crash occurs when the database management system (DBMS)
crashes due to hardware failure, software bugs, or power loss. This can lead to the loss of data that
has not been written to permanent storage (e.g., in-memory data).
3. Disk Failures: Disk failures occur when the underlying storage device, such as a hard drive or
solid-state drive (SSD), fails due to physical damage, like a head crash, or due to issues in data
transfer. A disk failure can cause the loss of data or the corruption of stored data, which may be
irrecoverable if not properly backed up.
Understanding these different types of failures helps in implementing recovery mechanisms such
as log-based recovery, checkpointing, and rollback operations to ensure the integrity and
consistency of the database.
5. Differentiate between prevention and detection of deadlocks.
Deadlocks occur when two or more transactions are waiting indefinitely for each other to release
resources, such as locks, and thus cannot proceed. Handling deadlocks in a database system can
be done through either prevention or detection mechanisms:
1. Deadlock Prevention:
Wait-Die Scheme: an older transaction that requests a resource held by a younger transaction is allowed to wait; a younger transaction that requests a resource held by an older transaction is rolled back ("dies"). Because a younger transaction never waits on an older one, no cycle of waits can form.
Wound-Wait Scheme: an older transaction that requests a resource held by a younger transaction "wounds" it, forcing the younger transaction to roll back; a younger transaction that requests a resource held by an older transaction simply waits. Here an older transaction never waits on a younger one, so again no cycle of waits can form.
Prevention techniques avoid deadlocks by designing transaction behavior in a way that makes it
impossible for deadlocks to occur. However, this comes at the cost of potentially higher overhead,
as transactions may be aborted and restarted frequently.
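Both schemes reduce to a comparison of timestamps (a smaller timestamp means an older transaction). A minimal sketch of the two decision rules:

```python
# Deadlock-prevention decisions; smaller timestamp = older transaction.

def wait_die(requester_ts, holder_ts):
    """Older requester waits; younger requester dies (is rolled back)."""
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    """Older requester wounds (rolls back) the holder; younger requester waits."""
    return "wound holder" if requester_ts < holder_ts else "wait"
```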
2. Deadlock Detection:
Deadlock detection involves periodically checking for cycles in a wait-for graph, which is a directed
graph that tracks the waiting relationships between transactions. If a cycle is detected, it indicates a
deadlock.
Once a deadlock is detected, one or more transactions in the cycle are selected to be rolled back to
break the deadlock. The choice of which transaction to abort may depend on factors like the
transaction's age, resources it holds, or the cost of aborting it.
Detection mechanisms provide more flexibility and efficiency by allowing transactions to continue
executing as long as no deadlock exists, but the system must periodically check for deadlocks,
which adds overhead.
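Detection can be sketched as a depth-first search for a cycle in the wait-for graph, followed by victim selection; choosing the youngest transaction in the cycle, as below, is one common policy among those listed above:

```python
# Deadlock detection on a wait-for graph: an edge (u, v) means
# transaction u is waiting for a lock held by transaction v.

def find_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)

    def dfs(u, path, seen):
        if u in path:
            return path[path.index(u):]  # the cycle itself
        if u in seen:
            return None
        seen.add(u)
        for v in graph.get(u, ()):
            cycle = dfs(v, path + [u], seen)
            if cycle:
                return cycle
        return None

    seen = set()
    for node in graph:
        cycle = dfs(node, [], seen)
        if cycle:
            return cycle
    return None

def pick_victim(cycle, timestamps):
    # Abort the youngest transaction (largest timestamp) to break the cycle
    return max(cycle, key=lambda t: timestamps[t])
```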
