FDBMS Unit 4, 5
As per the rule of first normal form, an attribute (column) of a table cannot hold multiple values.
It should hold only atomic values.
In our database, we have two people with the same name Robert Phil, but they live in different
places.
Fig: Composite key in Database
Hence, we require both Full Name and Address to identify a record uniquely. That is a composite
key.
It is clear that we can't move our simple database forward to Second Normal Form (2NF)
unless we partition the table above.
We have divided our 1NF table into two tables, viz. Table 1 and Table 2. Table 1 contains
member information. Table 2 contains information on movies rented.
We have introduced a new column called Membership_ID, which is the primary key for Table 1.
Records can be uniquely identified in Table 1 using the membership id.
Fig: Database – Foreign Key
From the above table, Membership_ID is the Foreign Key:
Membership_ID   Movies data
1               Pirates of the Caribbean
1               Clash of Titans
2               Forgetting Sarah Marshal
2               Daddy's little Girl
3               Clash of Titans
A Foreign Key references the primary key of another table. It helps connect your tables (a DDL sketch follows the table below).
A foreign key can have a different name from its primary key.
It ensures that rows in one table have corresponding rows in another.
Unlike primary keys, foreign keys do not have to be unique; most often they aren't.
Foreign keys can be null even though primary keys cannot.
Membership_ID (Foreign Key)   Movies data
1                             Pirates of the Caribbean
1                             Clash of Titans
2                             Forgetting Sarah Marshal
2                             Daddy's little Girl
3                             Clash of Titans
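As a minimal DDL sketch of this relationship (the table names, column names, and data types below are illustrative assumptions, not part of the notes):

CREATE TABLE members (
    membership_id INT PRIMARY KEY,
    full_name     VARCHAR(100),
    address       VARCHAR(200)
);

CREATE TABLE movies_rented (
    rental_id     INT PRIMARY KEY,
    membership_id INT,               -- foreign key: need not be unique and may be NULL
    movie_name    VARCHAR(100),
    FOREIGN KEY (membership_id) REFERENCES members (membership_id)
);

The FOREIGN KEY constraint makes the database reject any movies_rented row whose membership_id does not exist in members, which is exactly the "corresponding rows" guarantee described above.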
Consider Table 1. Changing the non-key column Full Name may change Salutation.
To move our 2NF table into 3NF, we again need to divide our table.
We have again divided our tables and created a new table which stores salutation
There are no transitive functional dependencies, and hence our table is in 3NF
In Table 3, Salutation ID is the primary key, and in Table 1, Salutation ID is a foreign key referencing
the primary key in Table 3.
Boyce Codd normal form (BCNF)
It is an advanced version of 3NF, which is why it is also referred to as 3.5NF. BCNF is stricter than 3NF. A
table complies with BCNF if it is in 3NF and, for every functional dependency X -> Y, X is a
super key of the table.
Example: Suppose there is a company wherein employees work in more than one department.
They store the data like this:
The table is not in BCNF as neither emp_id nor emp_dept alone is a key.
To make the table comply with BCNF, we can break it into three tables like this:
emp_nationality table:
emp_id emp_nationality
1001 Austrian
1002 American
emp_dept table: (emp_dept, dept_type, dept_no_of_emp), one row per department.
emp_dept_mapping table:
emp_id emp_dept
1001 Production and planning
1001 stores
1002 design and technical support
1002 Purchasing department
Functional dependencies:
emp_id -> emp_nationality
emp_dept -> {dept_type, dept_no_of_emp}
Candidate keys:
For first table: emp_id
For second table: emp_dept
For third table: {emp_id, emp_dept}
This is now in BCNF, as the left-hand side of every functional dependency is a key.
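A minimal DDL sketch of this decomposition (data types and constraint details are assumptions; the column list of emp_dept follows the functional dependencies given above):

CREATE TABLE emp_nationality (
    emp_id          INT PRIMARY KEY,
    emp_nationality VARCHAR(50)
);

CREATE TABLE emp_dept (
    emp_dept       VARCHAR(100) PRIMARY KEY,
    dept_type      VARCHAR(20),
    dept_no_of_emp INT
);

CREATE TABLE emp_dept_mapping (
    emp_id   INT          REFERENCES emp_nationality (emp_id),
    emp_dept VARCHAR(100) REFERENCES emp_dept (emp_dept),
    PRIMARY KEY (emp_id, emp_dept)   -- the only candidate key is {emp_id, emp_dept}
);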
A good database design has many benefits and is a goal every DBA should achieve:
Easy Retrieval of Information
If the design is developed properly, then it would be easier to retrieve information. Correct
design means the tables, constraints, and relationships created are flawless.
Easier Modification
Changes that you make to the value of a given field will not adversely affect the values of other
fields within the table.
Easy to Maintain
The database structure should be easy to maintain. The design is good if a change to one field
does not force changes in other fields.
Information
With a good design, you can enhance the quality and consistency of information.
Well-designed Database
If the database is well designed, then the flaws and issues of a poorly designed database will not
need to be addressed.
A badly designed database has the following problems:
Related data is scattered over various tables; a change must then be made in many places.
It's possible that the information is only half present: it's there in one table but missing in
another one.
Data is inconsistent or ambiguous (open to multiple interpretations).
The database is unnecessarily complex and hacked together with lots of tricks; the designer
made things needlessly complicated where a far simpler approach would have worked.
The database has 'hidden' information, for example in the sequence of rows in a table.
The database is slow, inflexible, hard to extend, and cannot handle all real-life situations.
Key Terms :
Multivalued Dependency
Trivial Functional Dependency
Non-Trivial Functional Dependency
Transitive Dependency
Example:
Car_model Maf_year Color
H001 2017 Metallic
H001 2017 Green
H005 2018 Metallic
H005 2018 Blue
H010 2015 Metallic
H033 2012 Gray
In this example, maf_year and color are independent of each other but both depend on
car_model. These two columns are therefore said to be multivalued dependent on car_model.
This dependency can be represented like this:
car_model ->> maf_year
car_model ->> colour
Trivial Functional Dependency in DBMS
A functional dependency is said to be trivial if the attributes on the right-hand side are
a subset of the attributes on the left-hand side.
So, X -> Y is a trivial functional dependency if Y is a subset of X. Let's understand this with a Trivial
Functional Dependency example.
For example:
Emp_id Emp_name
AS555 Harry
AS811 George
AS999 Kevin
Consider this table with two columns, Emp_id and Emp_name. Here {Emp_id, Emp_name} -> Emp_id is a trivial functional dependency, since Emp_id is a subset of {Emp_id, Emp_name}.
Non-Trivial Functional Dependency in DBMS
A functional dependency X -> Y is non-trivial if Y is not a subset of X.
Example:
{Company} -> {CEO} (if we know the Company, we know the CEO's name)
CEO is not a subset of Company, and hence it is a non-trivial functional dependency.
Transitive Dependency in DBMS
A transitive dependency arises when X -> Y and Y -> Z hold, so that X -> Z holds indirectly through Y.
Example:
{Company} -> {CEO} (if we know the company, we know its CEO's name)
{CEO} -> {Age} (if we know the CEO, we know the CEO's age)
Therefore, {Company} -> {Age} should hold; that makes sense because if we know the company name, we
can know its CEO's age.
4.15 Armstrong’s Axioms in Functional Dependency
The term Armstrong axioms refers to the sound and complete set of inference rules or axioms,
introduced by William W. Armstrong, that is used to test the logical implication of functional
dependencies. If F is a set of functional dependencies, then the closure of F, denoted as F+,
is the set of all functional dependencies logically implied by F. Armstrong's Axioms are a set of
rules that, when applied repeatedly, generate the closure of a set of functional dependencies.
Axioms –
1. Reflexivity: If Y is a subset of X, then X -> Y holds.
2. Augmentation: If X -> Y holds, then XZ -> YZ holds for any set of attributes Z.
3. Transitivity: If X -> Y and Y -> Z hold, then X -> Z holds.
Secondary Rules: These rules can be derived from the above axioms.
1. Union: If X -> Y and X -> Z hold, then X -> YZ holds.
2. Decomposition: If X -> YZ holds, then X -> Y and X -> Z hold.
3. Pseudotransitivity: If X -> Y and WY -> Z hold, then WX -> Z holds.
UNIT-V
TRANSACTIONS
Transactions:
A transaction is an action or series of actions that are performed by a single user or
application program, which reads or updates the contents of the database.
OR
A transaction can be defined as a logical unit of work on the database. This may be an entire
program, a piece of a program, or a single command (like the SQL commands such as INSERT or
UPDATE), and it may engage in any number of operations on the database. In the database
context, the execution of an application program can be thought of as one or more transactions
with non-database processing taking place in between.
Example: Suppose an employee of a bank transfers Rs 800 from X's account to Y's account. This
small transaction involves several low-level tasks:
X – ACCOUNT Y-ACCOUNT
Open_Account(X) Open_Account(y)
Old_Balance = X.balance Old _Balance = Y.Balance
New_Balance = Old_Balance - 800 New_Balance = Old_Balance + 800
X.balance = New_Balance Y.balance = New_Balance
Close_Account(X) Close_Account(Y)
Operations of Transaction:
Read(X): The read operation reads the value of X from the database and stores it
in a buffer in main memory.
Write(X): The write operation writes the value back to the database from the
buffer.
Let's take an example of a debit transaction on an account, which consists of the following
operations:
1. R(X);
2. X = X - 500;
3. W(X);
Let's assume the value of X before starting of the transaction is 4000.
The first operation reads X's value from database and stores it in a buffer.
The second operation will decrease the value of X by 500. So buffer will contain 3500.
The third operation will write the buffer's value to the database. So X's final value will be
3500.
But it is possible that, because of a hardware, software or power failure, the
transaction fails before finishing all the operations in the set.
For example: if the above debit transaction fails after executing operation 2,
then X's value will remain 4000 in the database, which is not acceptable to the bank.
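As a hedged sketch, the same debit expressed as a SQL transaction (the account table and column names are assumptions for illustration):

START TRANSACTION;
UPDATE account SET balance = balance - 500 WHERE account_no = 'X';
COMMIT;
-- If the system fails before COMMIT, the DBMS rolls the transaction back,
-- so X's balance stays at 4000 rather than being left in an undefined state.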
Transaction property
A transaction has four properties. These are used to maintain consistency in the database,
before and after the transaction.
Property of Transaction
1. Atomicity
2. Consistency
3. Isolation
4. Durability
Atomicity
It states that all operations of the transaction take place at once; if not, the transaction is
aborted.
There is no midway, i.e., the transaction cannot occur partially. Each transaction is
treated as one unit and either runs to completion or is not executed at all.
Example: Let's assume the following transaction T consists of T1 and T2. Account A holds Rs 600
and account B holds Rs 300. Transfer Rs 100 from account A to account B (a SQL sketch follows the table below).
T1               T2
Read(A)          Read(B)
A := A - 100     B := B + 100
Write(A)         Write(B)
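A minimal SQL sketch of the same transfer as one atomic unit (table and column names are illustrative assumptions):

START TRANSACTION;
UPDATE account SET balance = balance - 100 WHERE account_no = 'A';  -- A: 600 -> 500
UPDATE account SET balance = balance + 100 WHERE account_no = 'B';  -- B: 300 -> 400
COMMIT;
-- If either UPDATE fails, a ROLLBACK undoes both, leaving A = 600 and B = 300;
-- the transfer is never applied halfway.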
States of Transaction:
In a database, the transaction can be in one of the following states -
Active state
The active state is the first state of every transaction. In this state, the transaction is being
executed.
For example: insertion, deletion or updating of a record is done here, but the changes
are still not saved to the database.
Partially committed
In the partially committed state, a transaction executes its final operation, but the data is
still not saved to the database.
For example, in a total marks calculation, the final step that displays the total marks is executed in
this state.
Committed
A transaction is said to be in a committed state if it executes all its operations successfully. In this
state, all the effects are now permanently saved on the database system.
Failed state
If any of the checks made by the database recovery system fails, then the transaction is said
to be in the failed state.
In the example of total mark calculation, if the database is not able to fire a query to fetch
the marks, then the transaction will fail to execute.
Aborted
If any of the checks fail and the transaction has reached a failed state then the database
recovery system will make sure that the database is in its previous consistent state. If not
then it will abort or roll back the transaction to bring the database into a consistent state.
If the transaction fails in the middle of its execution, then all the operations it has already
executed are rolled back, returning the database to its consistent state.
After aborting the transaction, the database recovery module will select one of the two
operations:
1. Re-start the transaction
2. Kill the transaction
Schedule
A series of operations from one transaction to another transaction is known as a schedule. It is used
to preserve the order of the operations in each of the individual transactions.
1. Serial Schedule
The serial schedule is a type of schedule where one transaction is executed completely before
starting another transaction. In the serial schedule, when the first transaction completes its
cycle, then the next transaction is executed.
For example: Suppose there are two transactions T1 and T2 which have some operations. If there
is no interleaving of operations, then there are the following two possible outcomes:
1. Execute all the operations of T1 followed by all the operations of T2.
2. Execute all the operations of T2 followed by all the operations of T1.
In the given figure (a), Schedule A shows the serial schedule where T1 is followed by T2.
In the given figure (b), Schedule B shows the serial schedule where T2 is followed by T1.
2. Non-serial Schedule
If interleaving of operations is allowed, then the schedule is a non-serial schedule.
It contains many possible orders in which the system can execute the individual
operations of the transactions.
In the given figures (c) and (d), Schedule C and Schedule D are non-serial schedules; they
have interleaving of operations.
3. Serializable schedule
The serializability of schedules is used to find non-serial schedules that allow the
transactions to execute concurrently without interfering with one another.
It identifies which schedules are correct when executions of the transactions have
interleaving of their operations.
A non-serial schedule will be serializable if its result is equivalent to the result of its
transactions executed serially.
Here :
Schedule A and Schedule B are serial schedules.
Schedule C and Schedule D are non-serial schedules.
Testing of Serializability
Assume a schedule S. For S, we construct a graph known as a precedence graph. This graph
is a pair G = (V, E), where V is a set of vertices and E is a set of edges. The
set of vertices contains all the transactions participating in the schedule. The set
of edges contains all edges Ti -> Tj for which one of the three conditions holds:
1. Ti executes write(Q) before Tj executes read(Q).
2. Ti executes read(Q) before Tj executes write(Q).
3. Ti executes write(Q) before Tj executes write(Q).
Precedence graph of schedule S:
If a precedence graph contains a single edge Ti → Tj, then all the instructions of Ti are
executed before the first instruction of Tj is executed.
If a precedence graph for schedule S contains a cycle, then S is non-serializable. If the
precedence graph has no cycle, then S is known as serializable.
The precedence graph for schedule S1 contains a cycle, which is why schedule S1 is non-serializable.
Explanation:
Read(A): In T4, no subsequent writes to A, so no new edges.
Read(C): In T4, no subsequent writes to C, so no new edges.
Write(A): A is subsequently read by T5, so add edge T4 → T5.
Read(B): In T5, no subsequent writes to B, so no new edges.
Write(C): C is subsequently read by T6, so add edge T4 → T6.
Write(B): B is subsequently read by T6, so add edge T5 → T6.
Write(C): In T6, no subsequent reads of C, so no new edges.
Write(A): In T5, no subsequent reads of A, so no new edges.
Write(B): In T6, no subsequent reads of B, so no new edges.
The precedence graph for schedule S2 contains no cycle, which is why schedule S2 is serializable.
Conflict Serializable Schedule
A schedule is called conflict serializable if, after swapping its non-conflicting operations, it
can be transformed into a serial schedule.
The schedule will be a conflict serializable if it is conflict equivalent to a serial schedule.
Conflicting Operations
Two operations are said to be conflicting if they belong to different transactions, operate on the
same data item, and at least one of them is a write operation.
Schedule S2 is a serial schedule because, in this, all operations of T1 are performed before
starting any operation of T2. Schedule S1 can be transformed into a serial schedule by swapping
non-conflicting operations of S1.
Schedule S2 (serial schedule):
T1            T2
Read(A)
Write(A)
Read(B)
Write(B)
              Read(A)
              Write(A)
              Read(B)
              Write(B)
View Serializability
o A schedule will be view serializable if it is view equivalent to a serial schedule.
o If a schedule is conflict serializable, then it will be view serializable.
o A schedule that is view serializable but not conflict serializable contains blind writes.
View Equivalent
Two schedules S1 and S2 are said to be view equivalent if they satisfy the following conditions:
1. Initial Read
An initial read of both schedules must be the same. Suppose two schedules S1 and S2. In schedule
S1, if a transaction T1 is reading the data item A, then in S2, transaction T1 should also read A.
The above two schedules are view equivalent because the initial read operation in S1 is done by T1 and
in S2 it is also done by T1.
2. Updated Read
In schedule S1, if Ti is reading A, which is updated by Tj, then in S2 also Ti should read A as
updated by Tj.
The above two schedules are not view equal because, in S1, T3 is reading A updated by T2, while in S2,
T3 is reading A updated by T1.
3. Final Write
The final write must be the same in both schedules. In schedule S1, if a transaction T1
updates A last, then in S2 the final write operation should also be done by T1.
The above two schedules are view equal because the final write operation in S1 is done by T3 and in S2
the final write operation is also done by T3.
Example:
Schedule S (with three transactions T1, T2 and T3)
The total number of possible serial schedules for three transactions = 3! = 6:
S1 = <T1 T2 T3>
S2 = <T1 T3 T2>
S3 = <T2 T3 T1>
S4 = <T2 T1 T3>
S5 = <T3 T1 T2>
S6 = <T3 T2 T1>
Recoverability of Schedule
Sometimes a transaction may not execute completely due to a software issue, system crash or
hardware failure. In that case, the failed transaction has to be rolled back. But some other
transaction may also have used the value produced by the failed transaction, so we also have to
roll back those transactions.
The above Table 1 shows a schedule which has two transactions. T1 reads and writes the value of
A, and that value is read and written by T2. T2 commits, but later on T1 fails. Due to the failure,
we have to roll back T1. T2 should also be rolled back because it read the value written by T1, but
T2 cannot be rolled back because it has already committed. This type of schedule is known as an
irrecoverable schedule.
Irrecoverable schedule: The schedule is irrecoverable if Tj reads the updated value of Ti and
Tj commits before Ti commits.
The above Table 2 shows a schedule with two transactions. Transaction T1 reads and writes A,
and that value is read and written by transaction T2. But later on, T1 fails. Due to this, we have to
roll back T1. T2 should also be rolled back because it has read the value written by T1. As T2 has not
committed before T1 commits, we can roll back transaction T2 as well. So the schedule is recoverable with
cascading rollback.
Recoverable with cascading rollback: The schedule is recoverable with cascading rollback
if Tj reads the updated value of Ti and the commit of Tj is delayed until the commit of Ti.
The above Table 3 shows a schedule with two transactions. Transaction T1 reads and writes A and
commits, and only then is that value read and written by T2. So this is a cascadeless recoverable schedule.
Isolation levels define the degree to which a transaction must be isolated from the data
modifications made by any other transaction in the database system. A transaction isolation
level is defined by the following phenomena –
Dirty Read – A dirty read is the situation when a transaction reads data that has not
yet been committed. For example, let's say transaction T1 updates a row and leaves it
uncommitted; meanwhile, transaction T2 reads the updated row. If transaction T1 rolls back
the change, transaction T2 will have read data that is considered never to have existed.
Non-Repeatable Read – A non-repeatable read occurs when a transaction reads the same
row twice and gets a different value each time. For example, suppose transaction T1 reads
data. Due to concurrency, another transaction T2 updates the same data and commits. Now,
if transaction T1 rereads the same data, it will retrieve a different value.
Phantom Read – A phantom read occurs when the same query is executed twice, but the
rows retrieved by the two executions are different. For example, suppose transaction T1 retrieves a
set of rows that satisfy some search criteria. Now, transaction T2 inserts some new rows
that match the search criteria for transaction T1. If transaction T1 re-executes the
statement that reads the rows, it gets a different set of rows this time.
Based on these phenomena, the SQL standard defines four isolation levels:
1. Read Uncommitted – Read Uncommitted is the lowest isolation level. At this level, one
transaction may read not-yet-committed changes made by other transactions, thereby
allowing dirty reads. At this level, transactions are not isolated from each other.
2. Read Committed – This isolation level guarantees that any data read was committed at
the moment it is read. Thus it does not allow dirty reads. The transaction holds a read or
write lock on the current row and thus prevents other transactions from reading, updating
or deleting it.
3. Repeatable Read – This is a more restrictive isolation level. The transaction holds read
locks on all rows it references and write locks on all rows it inserts, updates, or deletes.
Since other transactions cannot read, update or delete these rows, it avoids non-repeatable
reads.
4. Serializable – This is the highest isolation level. A serializable execution is guaranteed to
be serializable: an execution of operations in which concurrently executing transactions
appear to be executing serially (see the sketch after this list).
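As a hedged sketch, the isolation level is usually chosen per session or per transaction. The statement below follows the SQL standard, though exact support and default levels vary by DBMS:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- dirty reads possible
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;    -- no dirty reads
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;    -- no dirty reads
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- no non-repeatable reads
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;      -- no phantom reads

Only one level would be set in practice; the statement applies to the transaction that follows it (in some systems it must be issued at the start of the transaction).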
A transaction is a unit of work that is performed against a database. Transactions are units or
sequences of work accomplished in a logical order, whether in a manual fashion by a user or
automatically by some sort of a database program. A transaction is the propagation of one or
more changes to the database.
4. ROLLBACK: If any error occurs with any of the grouped SQL statements, all changes need to
be aborted. The process of reversing changes is called rollback. This command can only be
used to undo transactions since the last COMMIT or ROLLBACK command was issued.
Syntax: ROLLBACK;
Example:
From example 1 above, delete those records from the table which have AGE = 20 and then ROLLBACK the
changes in the database.
Queries:
DELETE FROM student WHERE AGE =20;
ROLLBACK;
OUTPUT:
5. SAVEPOINT: This command is used to create a point within a transaction to which you can later roll
back without rolling back the entire transaction.
Syntax: SAVEPOINT SAVEPOINT_NAME;
ROLLBACK TO SAVEPOINT_NAME;
You can ROLLBACK to any SAVEPOINT at any time to return the appropriate data to its original
state.
Example:
From Sample table 1 in the above example,
delete those records from the table which have AGE = 20 and then ROLLBACK the changes in
the database by keeping savepoints.
Queries:
SAVEPOINT SP1; //Savepoint created.
DELETE FROM Student WHERE AGE = 20; //deleted
SAVEPOINT SP2; //Savepoint created.
Here SP1 is the first SAVEPOINT, created before the deletion.
In this example, one deletion has taken place.
After the deletion, SAVEPOINT SP2 is created.
Now that the deletion has taken place, let us assume that you have changed your mind and decided
to ROLLBACK to the SAVEPOINT that you identified as SP1, which was created before the deletion.
The deletion is undone by this statement:
ROLLBACK TO SP1;
//Rollback completed.
6. RELEASE SAVEPOINT:- This command is used to remove a SAVEPOINT that you have
created.
Syntax: RELEASE SAVEPOINT SAVEPOINT_NAME;
Once a SAVEPOINT has been released, you can no longer use the ROLLBACK command to undo
transactions performed since that SAVEPOINT (see the sketch below).
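A short sketch continuing the Student example (the savepoint name SP3 is hypothetical):

START TRANSACTION;
SAVEPOINT SP3;
DELETE FROM Student WHERE AGE = 20;
RELEASE SAVEPOINT SP3;    -- SP3 is removed
-- ROLLBACK TO SP3;       -- would now fail, because the savepoint no longer exists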
SET TRANSACTION: It is used to initiate a database transaction and to specify the characteristics of the
transaction that follows.
Syntax: SET TRANSACTION [READ WRITE | READ ONLY];
Serializability by Locks-Locking: Lock-Based Protocol
In this type of protocol, a transaction cannot read or write data until it acquires an appropriate
lock on it. There are two types of lock:
1. Shared lock:
o It is also known as a Read-only lock. With a shared lock, the data item can only be read by the
transaction.
o It can be shared between transactions because, when a transaction holds a shared lock,
it cannot update the data item.
2. Exclusive lock:
o With an exclusive lock, the data item can be both read and written by the
transaction.
o This lock is exclusive: under this lock, multiple transactions cannot modify the same data
simultaneously.
Simplistic lock protocol: It is the simplest way of locking the data during a transaction. Simplistic
lock-based protocols require every transaction to obtain a lock on the data before it inserts, deletes
or updates it. The data item is unlocked after the transaction completes (a SQL sketch follows below).
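As a hedged illustration, many SQL systems let a transaction request these locks explicitly. The statements below work in PostgreSQL and MySQL 8.0 (keyword details differ in other systems), and the table and column names are assumptions:

START TRANSACTION;
SELECT balance FROM account WHERE account_no = 'X' FOR SHARE;   -- shared (read-only) lock on the row
SELECT balance FROM account WHERE account_no = 'Y' FOR UPDATE;  -- exclusive (read/write) lock on the row
UPDATE account SET balance = balance + 100 WHERE account_no = 'Y';
COMMIT;  -- the locks are released when the transaction ends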
Two-phase locking (2PL)
Growing phase: In the growing phase, new locks on data items may be acquired by the
transaction, but none can be released.
Shrinking phase: In the shrinking phase, existing locks held by the transaction may be released,
but no new locks can be acquired.
In the below example, if lock conversion is allowed, then the following can happen:
1. Upgrading of a lock (from S(a) to X(a)) is allowed in the growing phase.
2. Downgrading of a lock (from X(a) to S(a)) must be done in the shrinking phase.
Example :
The following example shows how unlocking and locking work with 2PL.
Transaction T1:
o Growing phase: from step 1-3
o Shrinking phase: from step 5-7
o Lock point: at 3
Transaction T2:
o Growing phase: from step 2-6
o Shrinking phase: from step 8-9
o Lock point: at 6
Strict Two-phase locking (Strict-2PL)
o The first phase of Strict-2PL is similar to 2PL. In the first phase, after acquiring all the
locks, the transaction continues to execute normally.
o The only difference between 2PL and Strict-2PL is that Strict-2PL does not release a lock
immediately after using it.
o Strict-2PL waits until the whole transaction commits, and then it releases all the locks at
once (as sketched below).
o The Strict-2PL protocol does not have a shrinking phase of lock release.
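As a hedged sketch of this behaviour in SQL: in most relational systems, the write locks taken by the statements below are held until COMMIT rather than being released after each statement, which matches Strict-2PL for write locks (table and column names are assumptions):

START TRANSACTION;
UPDATE account SET balance = balance - 100 WHERE account_no = 'A';  -- exclusive lock on A's row acquired
UPDATE account SET balance = balance + 100 WHERE account_no = 'B';  -- exclusive lock on B's row acquired
COMMIT;  -- all locks are released together, only at commit time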
Timestamp Ordering Protocol
The Timestamp Ordering Protocol is used to order the transactions based on their
timestamps. The order of the transactions is nothing but the ascending order of their
creation.
The older transaction has the higher priority, which is why it executes first. To determine
the timestamp of a transaction, this protocol uses system time or a logical counter.
The lock-based protocol is used to manage the order between conflicting pairs among
transactions at execution time, but timestamp-based protocols start working as soon
as a transaction is created.
Let's assume there are two transactions T1 and T2. Suppose transaction T1 entered
the system at time 007 and transaction T2 entered the system at time 009.
T1 has the higher priority, so it executes first, as it entered the system first.
The timestamp ordering protocol also maintains the timestamps of the last 'read' and 'write'
operations on each data item.
1. Check the following condition whenever a transaction Ti issues a Read(X) operation:
o If W_TS(X) > TS(Ti), then the operation is rejected.
o If W_TS(X) <= TS(Ti), then the operation is executed.
o The timestamps of all the data items are updated.
2. Check the following condition whenever a transaction Ti issues a Write(X) operation:
o If TS(Ti) < R_TS(X), then the operation is rejected.
o If TS(Ti) < W_TS(X), then the operation is rejected and Ti is rolled back; otherwise the operation is executed.
Where,
TS(Ti) denotes the timestamp of the transaction Ti.
R_TS(X) denotes the Read time-stamp of data-item X.
W_TS(X) denotes the Write time-stamp of data-item X.
Validation Based Protocol
Validation(Ti): the time when Ti finishes its read phase and starts its validation phase.
This protocol uses the timestamp of the validation phase as the timestamp of the transaction
for serializability, because the validation phase is the phase that actually determines whether
the transaction will commit or roll back.
Hence TS(T) = Validation(T).
Serializability is determined during the validation process; it cannot be decided in advance.
While executing the transactions, this protocol provides a greater degree of concurrency and a
smaller number of conflicts. Thus it produces transactions with fewer rollbacks.