DBMS Quick Guide
DBMS - Overview
A database is a collection of related data, and data is a collection of facts and figures
that can be processed to produce information.
Data mostly represents recordable facts. Data aids in producing information, which is
based on facts. For example, if we have data about the marks obtained by all students,
we can then draw conclusions about toppers and average marks.
A database management system stores data in such a way that it becomes easier
to retrieve, manipulate, and produce information.
Characteristics
Traditionally, data was organized in file formats. DBMS was a new concept then, and
much research was done to make it overcome the deficiencies of the traditional style of
data management. A modern DBMS has the following characteristics −
Users
A typical DBMS has users with different rights and permissions who use it for
different purposes. Some users retrieve data and some back it up. The users of a
DBMS can be broadly categorized as follows −
End Users − End users are those who actually reap the benefits of having a
DBMS. End users can range from simple viewers who pay attention to the logs
or market rates to sophisticated users such as business analysts.
DBMS - Architecture
The design of a DBMS depends on its architecture. It can be centralized,
decentralized, or hierarchical. The architecture of a DBMS can be seen as either single-
tier or multi-tier. An n-tier architecture divides the whole system into n related but
independent modules, which can be independently modified, altered, changed, or
replaced.
In 1-tier architecture, the DBMS is the only entity where the user directly sits on the
DBMS and uses it. Any changes done here will directly be done on the DBMS itself. It
does not provide handy tools for end-users. Database designers and programmers
normally prefer to use single-tier architecture.
If the architecture of DBMS is 2-tier, then it must have an application through which
the DBMS can be accessed. Programmers use 2-tier architecture where they access
the DBMS by means of an application. Here the application tier is entirely
independent of the database in terms of operation, design, and programming.
3-tier Architecture
A 3-tier architecture separates its tiers from each other based on the complexity of
the users and how they use the data present in the database. It is the most widely
used architecture to design a DBMS.
Database (Data) Tier − At this tier, the database resides along with its
query processing languages. We also have the relations that define the data
and their constraints at this level.
Application (Middle) Tier − At this tier reside the application server and the
programs that access the database. For a user, this application tier presents
an abstracted view of the database. End-users are unaware of any existence
of the database beyond the application. At the other end, the database tier is
not aware of any other user beyond the application tier. Hence, the application
layer sits in the middle and acts as a mediator between the end-user and the
database.
User (Presentation) Tier − End-users operate on this tier and they know
nothing about any existence of the database beyond this layer. At this layer,
multiple views of the database can be provided by the application. All views
are generated by applications that reside in the application tier.
The very first data models were flat data models, in which all the data was kept in the
same plane. Earlier data models were not very scientific; hence they were prone to
introducing lots of duplication and update anomalies.
Entity-Relationship Model
The Entity-Relationship (ER) Model is based on the notion of real-world entities and
relationships among them. While formulating a real-world scenario into the database
model, the ER Model creates entity sets, relationship sets, general attributes, and
constraints.
ER Model is best used for the conceptual design of a database.
Relationships among entities can have the following mapping cardinalities −
one to one
one to many
many to one
many to many
Relational Model
The most popular data model in DBMS is the Relational Model. It is a more scientific
model than the others. This model is based on first-order predicate logic and defines a
table as an n-ary relation.
The main highlights of this model are −
Database Schema
A database schema is the skeleton structure that represents the logical view of the
entire database. It defines how the data is organized and how the relations among
them are associated. It formulates all the constraints that are to be applied on the
data.
A database schema defines its entities and the relationship among them. It contains
a descriptive detail of the database, which can be depicted by means of schema
diagrams. It’s the database designers who design the schema to help programmers
understand the database and make it useful.
A database schema can be divided broadly into two categories −
Physical Database Schema − This schema pertains to the actual storage of data
and its form of storage (files, indices, etc.). It defines how the data will be stored
in secondary storage.
Logical Database Schema − This schema defines all the logical constraints
that need to be applied on the data stored. It defines tables, views, and
integrity constraints, as sketched in the example below.
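As an illustrative sketch (the students table, its columns, and the toppers view are hypothetical names, not taken from this guide), a logical schema expressed in SQL might look like this −
-- A table with key and domain (CHECK) constraints, part of the logical schema.
CREATE TABLE students (
    stu_id   INT PRIMARY KEY,
    stu_name VARCHAR(50) NOT NULL,
    marks    INT CHECK (marks BETWEEN 0 AND 100)
);
-- A view is also defined at the logical level.
CREATE VIEW toppers AS
    SELECT stu_id, stu_name FROM students WHERE marks >= 90;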
Database Instance
It is important that we distinguish these two terms individually. The database schema is
the skeleton of the database. It is designed when the database doesn't exist at all. Once
the database is operational, it is very difficult to make any changes to it. A database
schema does not contain any data or information.
A database instance is a state of operational database with data at any given time. It
contains a snapshot of the database. Database instances tend to change with time. A
DBMS ensures that its every instance (state) is in a valid state, by diligently following
all the validations, constraints, and conditions that the database designers have
imposed.
DBMS - Data Independence
If a database system is not multi-layered, then it becomes difficult to make any
changes in the database system. Database systems are designed in multi-layers as
we learnt earlier.
Data Independence
A database system normally contains a lot of data in addition to users’ data. For
example, it stores data about data, known as metadata, to locate and retrieve data
easily. It is rather difficult to modify or update a set of metadata once it is stored in
the database. But as a DBMS expands, it needs to change over time to satisfy the
requirements of the users. If the entire data were interdependent, changing it would
become a tedious and highly complex job.
Metadata itself follows a layered architecture, so that when we change data at one
layer, it does not affect the data at another level. This data is independent but
mapped to each other.
For example, in case we want to change or upgrade the storage system itself −
suppose we want to replace hard-disks with SSD − it should not have any impact on
the logical data or schemas.
Entity
An entity can be a real-world object, either animate or inanimate, that is easily
identifiable. For example, in a school database, students, teachers, classes, and
courses offered can be considered entities. All these entities have some attributes
or properties that give them their identity.
An entity set is a collection of similar types of entities. An entity set may contain
entities with attributes sharing similar values. For example, a Students set may
contain all the students of a school; likewise, a Teachers set may contain all the
teachers of a school from all faculties. Entity sets need not be disjoint.
Attributes
Entities are represented by means of their properties, called attributes. All
attributes have values. For example, a student entity may have name, class, and age
as attributes.
There exists a domain or range of values that can be assigned to attributes. For
example, a student's name cannot be a numeric value. It has to be alphabetic. A
student's age cannot be negative, etc.
Types of Attributes
Derived attribute − Derived attributes are the attributes that do not exist in
the physical database, but their values are derived from other attributes
present in the database. For example, average_salary in a department should
not be saved directly in the database; instead it can be derived. For another
example, age can be derived from date_of_birth.
Single-value attribute − Single-value attributes contain a single value. For
example − Social_Security_Number.
Multi-value attribute − Multi-value attributes may contain more than one
value. For example, a person can have more than one phone number,
email address, etc.
Candidate Key − A minimal super key is called a candidate key. An entity set
may have more than one candidate key.
Primary Key − A primary key is one of the candidate keys chosen by the
database designer to uniquely identify the entity set.
Relationship
The association among entities is called a relationship. For example, an employee
works_at a department, a student enrolls in a course. Here, Works_at and Enrolls
are called relationships.
Relationship Set
Degree of Relationship
Binary = degree 2
Ternary = degree 3
n-ary = degree n
Mapping Cardinalities
Cardinality defines the number of entities in one entity set that can be associated
with the number of entities of the other set via a relationship set.
One-to-one − One entity from entity set A can be associated with at most
one entity of entity set B and vice versa.
One-to-many − One entity from entity set A can be associated with more
than one entity of entity set B; however, an entity from entity set B can be
associated with at most one entity of entity set A (a SQL sketch of this case follows the list).
Many-to-one − More than one entity from entity set A can be associated
with at most one entity of entity set B; however, an entity from entity set B
can be associated with more than one entity from entity set A.
Many-to-many − One entity from A can be associated with more than one
entity from B and vice versa.
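A minimal sketch of how a one-to-many cardinality is commonly realized in SQL; the department and employee tables are assumed for illustration only −
-- One department can be associated with many employees (one-to-many).
CREATE TABLE department (
    dept_id   INT PRIMARY KEY,
    dept_name VARCHAR(50)
);
CREATE TABLE employee (
    emp_id   INT PRIMARY KEY,
    emp_name VARCHAR(50),
    dept_id  INT REFERENCES department(dept_id)  -- many employees point to one department
);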
ER Diagram Representation
Let us now learn how the ER Model is represented by means of an ER diagram. Any
object, for example, entities, attributes of an entity, relationship sets, and attributes
of relationship sets, can be represented with the help of an ER diagram.
Entity
Entities are represented by means of rectangles. Rectangles are named with the
entity set they represent.
Attributes
Attributes are the properties of entities. Attributes are represented by means of
ellipses. Every ellipse represents one attribute and is directly connected to its entity
(rectangle).
If the attributes are composite, they are further divided in a tree-like structure.
Every node is then connected to its attribute. That is, composite attributes are
represented by ellipses that are connected to an ellipse.
Multivalued attributes are depicted by double ellipse.
Relationship
Relationships are represented by a diamond-shaped box. The name of the relationship is
written inside the diamond-box. All the entities (rectangles) participating in a
relationship are connected to it by a line.
Many-to-one − When more than one instance of the entity on the left, and only
one instance of the entity on the right, can be associated with the relationship, it
is marked as 'N:1' and depicts a many-to-one relationship.
Many-to-many − When more than one instance of the entity on the left and more
than one instance of the entity on the right can be associated with the relationship,
it depicts a many-to-many relationship.
Participation Constraints
Generalization Aggregation
Generalization
As mentioned above, the process of generalizing entities, where the generalized
entity contains the properties of all the generalized entities, is called generalization.
In generalization, a number of entities are brought together into one generalized
entity based on their similar characteristics. For example, pigeon, house sparrow,
crow, and dove can all be generalized as Birds.
Specialization
Specialization is the opposite of generalization. In specialization, a group of entities
is divided into sub-groups based on their characteristics. Take a group ‘Person’ for
example. A person has a name, date of birth, gender, etc. These properties are
common to all persons, i.e., human beings. But in a company, persons can be identified
as employee, employer, customer, or vendor, based on what role they play in the
company.
Similarly, in a school database, persons can be specialized as teacher, student, or a
staff, based on what role they play in school as entities.
Inheritance
We use all the above features of the ER Model in order to create classes of objects in
object-oriented programming. The details of entities are generally hidden from the
user; this process is known as abstraction.
For example, the attributes of a Person class such as name, age, and gender can be
inherited by lower-level entities such as Student or Teacher.
Codd's 12 Rules
Dr. Edgar F. Codd, after his extensive research on the Relational Model of database
systems, came up with twelve rules of his own which, according to him, a database
must obey in order to be regarded as a true relational database.
These rules can be applied to any database system that manages stored data using
only its relational capabilities. This is a foundation rule, which acts as a base for all
the other rules.
Concepts
Tables − In the relational data model, relations are saved in the format of tables. This
format stores the relation among entities. A table has rows and columns, where rows
represent records and columns represent the attributes.
Tuple − A single row of a table, which contains a single record for that relation is
called a tuple.
Relation schema − A relation schema describes the relation name (table name),
attributes, and their names.
Relation key − Each row has one or more attributes, known as relation key, which
can identify the row in the relation (table) uniquely.
Attribute domain − Every attribute has some pre-defined value scope, known as
attribute domain.
Constraints
Every relation has some conditions that must hold for it to be a valid relation. These
conditions are called Relational Integrity Constraints. There are three main
integrity constraints −
Key constraints
Domain constraints
Referential integrity constraints
Key Constraints
There must be at least one minimal subset of attributes in the relation which can
identify a tuple uniquely. This minimal subset of attributes is called a key for that
relation. If there is more than one such minimal subset, they are called
candidate keys.
In a relation with a key attribute, no two tuples can have identical values for
the key attributes.
A key attribute cannot have NULL values.
Domain Constraints
Attributes have specific values in real-world scenarios. For example, age can only be a
positive integer. Attempts have been made to employ the same constraints on the attributes of
a relation. Every attribute is bound to have a specific range of values. For example,
age cannot be less than zero and telephone numbers cannot contain a digit outside
0-9.
Referential Integrity Constraints
Referential integrity constraints work on the concept of Foreign Keys. A foreign key is
a key attribute of a relation that can be referred to in another relation, as sketched below.
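A minimal sketch of referential integrity in SQL, assuming hypothetical author and book relations −
CREATE TABLE author (
    author_id   INT PRIMARY KEY,
    author_name VARCHAR(50)
);
CREATE TABLE book (
    book_id   INT PRIMARY KEY,
    title     VARCHAR(100),
    author_id INT,
    FOREIGN KEY (author_id) REFERENCES author(author_id)
);
-- This insert is rejected unless an author with author_id = 7 already exists.
INSERT INTO book VALUES (1, 'DBMS Basics', 7);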
Relational Algebra
Relational database systems are expected to be equipped with a query language that
can assist their users in querying the database instances. There are two kinds of query
languages − relational algebra and relational calculus.
Relational Algebra
Relational algebra is a procedural query language, which takes instances of relations
as input and yields instances of relations as output. It uses operators to perform
queries. An operator can be either unary or binary. They accept relations as their
input and yield relations as their output. Relational algebra is performed recursively
on a relation and intermediate results are also considered relations.
Select
Project
Union
Set difference
Cartesian product
Rename
Notation − σp(r)
Where σ stands for the selection predicate and r stands for the relation. p is a propositional
logic formula which may use connectors like and, or, and not. These terms may use
relational operators like − =, ≠, ≥, <, >, ≤.
For example −
σ subject = "database" (Books)
Output − Selects tuples from Books where subject is 'database'.
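For comparison, the equivalent query in SQL (assuming a Books table with a subject column) would be −
SELECT * FROM Books WHERE subject = 'database';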
Notation − ∏ A1, A2, ..., An (r)
Where A1, A2, ..., An are attribute names of the relation r.
For example −
∏ subject, author (Books)
Output − Selects and projects columns named subject and author from the relation Books.
Notation − r ∪ s
r ∪ s = { t | t ∈ r or t ∈ s }
Where r and s are either database relations or relation result sets (temporary
relations).
For example −
∏ author (Books) ∪ ∏ author (Articles)
Output − Projects the names of the authors who have written either a book or an
article or both.
Notation − r − s
Finds all the tuples that are present in r but not in s.
For example −
∏ author (Books) − ∏ author (Articles)
Output − Provides the names of authors who have written books but not articles.
Notation − r × s
r × s = { q t | q ∈ r and t ∈ s }
For example −
σ author = 'tutorialspoint' (Books × Articles)
Output − Yields a relation that shows all the books and articles written by
tutorialspoint.
Set intersection
Assignment
Natural join
Relational Calculus
In contrast to Relational Algebra, Relational Calculus is a non-procedural query
language, that is, it tells what to do but never explains how to do it.
Notation − { T | Condition }
For example −
{ T.name | Author(T) AND T.article = 'database' }
Output − Returns tuples with 'name' from Author who has written an article on
'database'.
TRC can be quantified. We can use Existential (∃) and Universal Quantifiers (∀).
For example −
Output − The above query will yield the same result as the previous one.
Notation −
{ < a1, a2, a3, ..., an > | P(a1, a2, a3, ..., an) }
Where a1, a2, ..., an are attributes and P stands for formulae built by inner attributes.
For example −
{ < article, page, subject > | ∈ TutorialsPoint ∧ subject = 'database' }
Output − Yields article, page, and subject from the relation TutorialsPoint, where
subject is 'database'.
Just like TRC, DRC can also be written using existential and universal quantifiers.
DRC also involves relational operators.
The expressive power of Tuple Relational Calculus and Domain Relational Calculus is
equivalent to that of Relational Algebra.
There are several processes and algorithms available to convert ER diagrams into
relational schemas. Some of them are automated and some of them are manual. Here,
we focus on mapping diagram contents to relational basics.
Mapping Entity
An entity is a real-world object with some attributes.
Mapping Relationship
A relationship is an association among entities.
Mapping Process
SQL Overview
SQL is a programming language for Relational Databases. It is designed over
relational algebra and tuple relational calculus. SQL comes as a package with all
major distributions of RDBMS.
SQL comprises both data definition and data manipulation languages. Using the data
definition properties of SQL, one can design and modify a database schema, whereas the
data manipulation properties allow SQL to store and retrieve data from the database.
CREATE
For example −
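(An illustrative sketch; the article relation and its columns are assumed here, chosen to match the ALTER example further below.)
CREATE TABLE article (
    article_id INT PRIMARY KEY,
    title      VARCHAR(100)
);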
DROP
ALTER
For example−
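(A sketch; the ADD COLUMN keyword and the VARCHAR length are assumptions that vary slightly across RDBMS products.)
ALTER TABLE article ADD COLUMN subject VARCHAR(50);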
This command adds an attribute in the relation article with the name subject of
string type.
SELECT/FROM/WHERE
INSERT INTO/VALUES
UPDATE/SET/WHERE
DELETE FROM/WHERE
These basic constructs allow database programmers and users to enter data and
information into the database and retrieve it efficiently using a number of filter options.
SELECT/FROM/WHERE
SELECT − This is one of the fundamental query command of SQL. It is similar
to the projection operation of relational algebra. It selects the attributes based
on the condition described by WHERE clause.
FROM − This clause takes a relation name as an argument from which
attributes are to be selected/projected. If more than one relation name
is given, this clause corresponds to a Cartesian product.
WHERE − This clause defines predicate or conditions, which must match in
order to qualify the attributes to be projected.
For example −
Select author_name
From book_author
Where age > 50;
This command will yield the names of authors from the relation book_author whose
age is greater than 50.
INSERT INTO/VALUES
This command is used for inserting values into the rows of a table (relation).
Syntax−
INSERT INTO table (column1 [, column2, column3 ... ]) VALUES (value1 [, value2, value3 ... ]);
Or
INSERT INTO table VALUES (value1 [, value2, value3 ... ]);
For example −
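(An illustrative statement, reusing the hypothetical article relation from the CREATE example above.)
INSERT INTO article (article_id, title, subject) VALUES (1, 'Normalization', 'database');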
UPDATE/SET/WHERE
This command is used for updating or modifying the values of columns in a table
(relation).
Syntax −
UPDATE table_name SET column_name = value [, column_name = value ...] [WHERE condition];
For example −
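(An illustrative statement; the relation and values are assumptions.)
UPDATE article SET subject = 'dbms' WHERE article_id = 1;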
DELETE/FROM/WHERE
This command is used for removing one or more rows from a table (relation).
Syntax −
DELETE FROM table_name [WHERE condition];
For example −
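(An illustrative statement; the relation and the predicate are assumptions.)
DELETE FROM article WHERE subject = 'dbms';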
DBMS - Normalization
Functional Dependency
Functional dependency (FD) is a set of constraints between two attributes in a
relation. A functional dependency says that if two tuples have the same values for
attributes A1, A2, ..., An, then those two tuples must also have the same values for
attributes B1, B2, ..., Bn.
Functional dependency is represented by an arrow sign (→) that is, X→Y, where X
functionally determines Y. The left-hand side attributes determine the values of
attributes on the right-hand side.
Armstrong's Axioms
If F is a set of functional dependencies, then the closure of F, denoted as F+, is the
set of all functional dependencies logically implied by F. Armstrong's Axioms are a set
of rules that, when applied repeatedly, generate the closure of functional
dependencies.
Normalization
If a database design is not perfect, it may contain anomalies, which are like a bad
dream for any database administrator. Managing a database with anomalies is next
to impossible.
Update anomalies − If data items are scattered and are not linked to each
other properly, then it could lead to strange situations. For example, when we
try to update one data item having its copies scattered over several places, a
few instances get updated properly while a few others are left with old values.
Such instances leave the database in an inconsistent state.
Deletion anomalies − We tried to delete a record, but parts of it were left
undeleted because, unknowingly, the data was also saved somewhere else.
Insert anomalies − We tried to insert data in a record that does not exist at
all.
Normalization is a method to remove all these anomalies and bring the database to a
consistent state.
Each attribute must contain only a single value from its pre-defined domain.
If we follow second normal form, then every non-prime attribute should be fully
functionally dependent on the prime key attributes. That is, if X → A holds, then there
should not be any proper subset Y of X for which Y → A also holds true.
We see here in the Student_Project relation that the prime key attributes are Stu_ID and
Proj_ID. According to the rule, the non-key attributes, i.e., Stu_Name and Proj_Name,
must be dependent upon both and not on any of the prime key attributes individually.
But we find that Stu_Name can be identified by Stu_ID and Proj_Name can be
identified by Proj_ID independently. This is called partial dependency, which is not
allowed in Second Normal Form.
We therefore break the relation in two, so that no partial dependency remains.
For a relation to be in Third Normal Form, for every non-trivial functional dependency X → A, either −
X is a superkey or,
A is a prime attribute.
We find that in the above Student_detail relation, Stu_ID is the key and only prime
key attribute. We find that City can be identified by Stu_ID as well as Zip itself.
Neither Zip is a superkey nor is City a prime attribute. Additionally, Stu_ID → Zip →
City, so there exists transitive dependency.
To bring this relation into third normal form, we break the relation into two relations
as follows −
After decomposition, Stu_ID is the super-key of the relation Student_Detail and Zip is
the super-key of the relation ZipCodes. So,
Stu_ID → Zip
and
Zip → City
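As a sketch, the two relations could be declared in SQL as follows (the column types are assumptions) −
CREATE TABLE ZipCodes (
    Zip  VARCHAR(10) PRIMARY KEY,  -- Zip → City
    City VARCHAR(50)
);
CREATE TABLE Student_Detail (
    Stu_ID   INT PRIMARY KEY,      -- Stu_ID → Zip
    Stu_Name VARCHAR(50),
    Zip      VARCHAR(10) REFERENCES ZipCodes(Zip)
);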
DBMS - Joins
We understand the benefits of taking a Cartesian product of two relations, which
gives us all the possible tuples that are paired together. But it might not be feasible
for us in certain cases to take a Cartesian product where we encounter huge
relations with thousands of tuples having a considerably large number of attributes.
Notation
R1 ⋈θ R2
R1 and R2 are relations having attributes (A1, A2, .., An) and (B1, B2,.. ,Bn) such
that the attributes don’t have anything in common, that is R1 ∩ R2 = Φ.
Student
SID Name Std
101 Alex 10
102 Maria 11
Subjects
Class Subject
10 Math
10 English
11 Music
11 Sports
Student_Detail =
STUDENT ⋈ Student.Std = Subject.Class SUBJECT
Student_Detail
SID Name Std Class Subject
101 Alex 10 10 Math
101 Alex 10 10 English
102 Maria 11 11 Music
102 Maria 11 11 Sports
Equijoin
When Theta join uses only equality comparison operator, it is said to be equijoin.
The above example corresponds to equijoin.
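In SQL, the same equijoin could be written as follows (assuming the Student relation exposes its class as a Std column and Subjects as a Class column, as in the join condition above) −
SELECT *
FROM Student
JOIN Subjects ON Student.Std = Subjects.Class;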
Natural join acts on those matching attributes where the values of attributes in both
the relations are same.
Courses
CID Course Dept
CS01 Database CS
ME01 Mechanics ME
EE01 Electronics EE
HoD
Dept Head
CS Alex
ME Maya
EE Mira
Courses ⋈ HoD
Dept CID Course Head
CS CS01 Database Alex
ME ME01 Mechanics Maya
EE EE01 Electronics Mira
Outer Joins
Theta Join, Equijoin, and Natural Join are called inner joins. An inner join includes
only those tuples with matching attributes and the rest are discarded in the resulting
relation. Therefore, we need to use outer joins to include all the tuples from the
participating relations in the resulting relation. There are three kinds of outer joins −
left outer join, right outer join, and full outer join.
Left
A B
100 Database
101 Mechanics
102 Electronics
Right
A B
100 Alex
102 Maya
104 Mira
Left Outer Join − Courses with HoD
A B C D
100 Database 100 Alex
101 Mechanics --- ---
102 Electronics 102 Maya

Right Outer Join − Courses with HoD
A B C D
100 Database 100 Alex
102 Electronics 102 Maya
--- --- 104 Mira

Full Outer Join − Courses with HoD
A B C D
100 Database 100 Alex
101 Mechanics --- ---
102 Electronics 102 Maya
--- --- 104 Mira
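In SQL, the three outer joins shown above can be written as follows, assuming the two tables are named Courses and HoD and the joining column is named A in both; note that FULL OUTER JOIN is not supported by every RDBMS (classic MySQL, for example, lacks it) −
-- Keep all rows from Courses (the left relation).
SELECT * FROM Courses LEFT OUTER JOIN HoD ON Courses.A = HoD.A;
-- Keep all rows from HoD (the right relation).
SELECT * FROM Courses RIGHT OUTER JOIN HoD ON Courses.A = HoD.A;
-- Keep all rows from both relations.
SELECT * FROM Courses FULL OUTER JOIN HoD ON Courses.A = HoD.A;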
DBMS - Storage System
The memory with the fastest access is the costliest one. Larger storage devices offer
slower speed and are less expensive; however, they can store huge volumes of
data as compared to CPU registers or cache memory.
Magnetic Disks
Hard disk drives are the most common secondary storage devices in present
computer systems. These are called magnetic disks because they use the concept of
magnetization to store information. Hard disks consist of metal disks coated with
magnetizable material. These disks are placed vertically on a spindle. A read/write
head moves in between the disks and is used to magnetize or de-magnetize the spot
under it. A magnetized spot can be recognized as 0 (zero) or 1 (one).
Hard disks are formatted in a well-defined order to store data efficiently. A hard disk
plate has many concentric circles on it, called tracks. Every track is further divided
into sectors. A sector on a hard disk typically stores 512 bytes of data.
RAID
RAID stands for Redundant Array of Independent Disks, which is a technology to
connect multiple secondary storage devices and use them as a single storage media.
RAID consists of an array of disks in which multiple disks are connected together to
achieve different goals. RAID levels define the use of disk arrays.
RAID 2 − RAID 2 records Error Correction Codes using Hamming distance for
its data, striped on different disks. Like level 0, each data bit in a word is
recorded on a separate disk and the ECC codes of the data words are stored on a
different set of disks. Due to its complex structure and high cost, RAID 2 is not
commercially available.
RAID 3 − RAID 3 stripes the data onto multiple disks. The parity bit
generated for the data word is stored on a different disk. This technique makes it
possible to overcome single-disk failures.
RAID 4 − In this level, an entire block of data is written onto data disks and
then the parity is generated and stored on a different disk. Note that level 3
uses byte-level striping, whereas level 4 uses block-level striping. Both level 3
and level 4 require at least three disks to implement RAID.
RAID 5 − RAID 5 writes whole data blocks onto different disks, but the parity
bits generated for data block stripe are distributed among all the data disks
rather than storing them on a different dedicated disk.
File Organization
File Organization defines how file records are mapped onto disk blocks. We have four
types of file organization to organize file records − heap, sequential, hash, and
clustered file organization.
File Operations
Operations on database files can be broadly classified into two categories −
Update Operations
Retrieval Operations
Update operations change the data values by insertion, deletion, or update. Retrieval
operations, on the other hand, do not alter the data but retrieve them after optional
conditional filtering. In both types of operations, selection plays a significant role.
Other than creation and deletion of a file, there could be several operations, which
can be done on files.
Open − A file can be opened in one of the two modes, read mode or write
mode. In read mode, the operating system does not allow anyone to alter
data. In other words, data is read only. Files opened in read mode can be
shared among several entities. Write mode allows data modification. Files
opened in write mode can be read but cannot be shared.
Locate − Every file has a file pointer, which tells the current position where
the data is to be read or written. This pointer can be adjusted accordingly.
Using find (seek) operation, it can be moved forward or backward.
Read − By default, when files are opened in read mode, the file pointer
points to the beginning of the file. There are options where the user can tell
the operating system where to locate the file pointer at the time of opening a
file. The very next data to the file pointer is read.
Write − User can select to open a file in write mode, which enables them to
edit its contents. It can be deletion, insertion, or modification. The file pointer
can be located at the time of opening or can be dynamically changed if the
operating system allows to do so.
Close − This is the most important operation from the operating system’s
point of view. When a request to close a file is generated, the operating
system removes all the locks (if in shared mode), saves the data (if altered) to
the secondary storage media, and releases all the buffers and file handlers
associated with the file.
The organization of data inside a file plays a major role here. The process to locate
the file pointer to a desired record inside a file varies based on whether the records
are arranged sequentially or clustered.
DBMS - Indexing
We know that data is stored in the form of records. Every record has a key field,
which helps it to be recognized uniquely.
Indexing is defined based on its indexing attributes. Indexing can be of the following
types (a short SQL sketch follows the list) −
Primary Index − Primary index is defined on an ordered data file. The data
file is ordered on a key field. The key field is generally the primary key of the
relation.
Secondary Index − Secondary index may be generated from a field which is
a candidate key and has a unique value in every record, or a non-key with
duplicate values.
Clustering Index − Clustering index is defined on an ordered data file. The
data file is ordered on a non-key field.
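As a SQL-level sketch of secondary indexes (the students table, its columns, and the index names are hypothetical; whether an index is primary or clustering is decided by the specific RDBMS and its physical design) −
-- Secondary index on a non-key column that may hold duplicate values.
CREATE INDEX idx_students_city ON students (city);
-- Secondary index on a candidate key that is unique in every record.
CREATE UNIQUE INDEX idx_students_email ON students (email);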
Dense Index
Sparse Index
Dense Index
In a dense index, there is an index record for every search key value in the database.
This makes searching faster but requires more space to store the index records themselves.
Index records contain search key value and a pointer to the actual record on the
disk.
Sparse Index
In a sparse index, index records are not created for every search key. An index record
here contains a search key and an actual pointer to the data on the disk. To search a
record, we first proceed to the index record and reach the actual location of the data.
If the data we are looking for is not where we directly reach by following the index,
then the system starts a sequential search until the desired data is found.
Multilevel Index
Index records comprise search-key values and data pointers. Multilevel index is
stored on the disk along with the actual database files. As the size of the database
grows, so does the size of the indices. There is an immense need to keep the index
records in the main memory so as to speed up the search operations. If single-level
index is used, then a large size index cannot be kept in memory which leads to
multiple disk accesses.
Multi-level Index helps in breaking down the index into several smaller indices in
order to make the outermost level so small that it can be saved in a single disk
block, which can easily be accommodated anywhere in the main memory.
B+ Tree
A B+ tree is a balanced multi-way search tree that follows a multi-level index format.
The leaf nodes of a B+ tree denote actual data pointers. A B+ tree ensures that all leaf
nodes remain at the same level, thus staying balanced. Additionally, the leaf nodes are
linked using a linked list; therefore, a B+ tree can support random access as well as
sequential access.
Structure of B+ Tree
Every leaf node is at equal distance from the root node. A B+ tree is of the order n
where n is fixed for every B+ tree.
Internal nodes −
Internal (non-leaf) nodes contain at least ⌈n/2⌉ pointers, except the root node.
At most, an internal node can contain n pointers.
Leaf nodes −
Leaf nodes contain at least ⌈n/2⌉ record pointers and ⌈n/2⌉ key values.
At most, a leaf node can contain n record pointers and n key values.
Every leaf node contains one block pointer P to point to next leaf node and
forms a linked list.
B+ Tree Insertion
B+ trees are filled from bottom and each entry is done at the leaf node.
If a leaf node overflows −
B+ Tree Deletion
If it is an internal node, delete and replace with the entry from the left
position.
If underflow occurs, distribute the entries from the nodes left to it.
DBMS - Hashing
For a huge database structure, it can be next to impossible to search all the
index values through all its levels and then reach the destination data block to
retrieve the desired data. Hashing is an effective technique to calculate the direct
location of a data record on the disk without using an index structure.
Hashing uses hash functions with search keys as parameters to generate the address
of a data record.
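As one concrete example, PostgreSQL allows a hash-based access structure to be requested explicitly; the orders table and customer_id column are hypothetical, and the USING HASH clause is PostgreSQL-specific −
-- Build a hash structure over customer_id instead of the default B-tree index.
CREATE INDEX idx_orders_customer ON orders USING HASH (customer_id);
-- Equality lookups like this one can then be answered through the hash buckets.
SELECT * FROM orders WHERE customer_id = 42;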
Hash Organization
Bucket − A hash file stores data in bucket format. Bucket is considered a unit
of storage. A bucket typically stores one complete disk block, which in turn
can store one or more records.
Hash Function − A hash function, h, is a mapping function that maps all the
set of search-keys K to the address where actual records are placed. It is a
function from search keys to bucket addresses.
Static Hashing
In static hashing, when a search-key value is provided, the hash function always
computes the same address. For example, if a mod-4 hash function is used, then it
shall generate only 4 values. The output address shall always be the same for that
function. The number of buckets provided remains unchanged at all times.
Operation
Bucket Overflow
The condition of bucket-overflow is known as collision. This is a fatal state for any
static hash function. In this case, overflow chaining can be used.
Overflow Chaining − When buckets are full, a new bucket is allocated for
the same hash result and is linked after the previous one. This mechanism is
called Closed Hashing.
Dynamic Hashing
The problem with static hashing is that it does not expand or shrink dynamically as
the size of the database grows or shrinks. Dynamic hashing provides a mechanism in
which data buckets are added and removed dynamically and on demand. Dynamic
hashing is also known as extendible hashing.
Hash function, in dynamic hashing, is made to produce a large number of values and
only a few are used initially.
Organization
The prefix of an entire hash value is taken as a hash index. Only a portion of the
hash value is used for computing bucket addresses. Every hash index has a depth
value to signify how many bits are used for computing a hash function. These bits
can address 2^n buckets. When all these bits are consumed − that is, when all the
buckets are full − then the depth value is increased linearly and twice as many buckets
are allocated.
Operation
Querying − Look at the depth value of the hash index and use those bits to
compute the bucket address.
Update − Perform a query as above and update the data.
Deletion − Perform a query to locate the desired data and delete the same.
Insertion − Compute the address of the bucket. If the bucket is already full,
add more buckets, add additional bits to the hash value, and re-compute the
hash function; otherwise, add the data to the bucket.
If all the buckets are full, perform the remedies of static hashing.
Hashing is not favorable when the data is organized in some ordering and the
queries require a range of data. When data is discrete and random, hash performs
the best.
Hashing algorithms have higher complexity than indexing, but all hash operations are
done in constant time.
DBMS - Transaction
A transaction can be defined as a group of tasks. A single task is the minimum
processing unit which cannot be divided further.
A’s Account
Open_Account(A)
Old_Balance = A.balance
New_Balance = Old_Balance - 500
A.balance = New_Balance
Close_Account(A)
B’s Account
Open_Account(B)
Old_Balance = B.balance
New_Balance = Old_Balance + 500
B.balance = New_Balance
Close_Account(B)
ACID Properties
A transaction is a very small unit of a program and it may contain several low-level
tasks. A transaction in a database system must maintain Atomicity, Consistency,
Isolation, and Durability − commonly known as the ACID properties − in order to ensure
accuracy, completeness, and data integrity.
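The transfer of 500 from A's account to B's account sketched above is normally wrapped in a single SQL transaction, so that either both updates take effect or neither does; the accounts table and its columns are assumptions −
BEGIN;  -- some systems use START TRANSACTION
UPDATE accounts SET balance = balance - 500 WHERE account_id = 'A';
UPDATE accounts SET balance = balance + 500 WHERE account_id = 'B';
COMMIT; -- issuing ROLLBACK instead would undo both updates (atomicity)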
Serializability
When multiple transactions are being executed by the operating system in a
multiprogramming environment, there are possibilities that instructions of one
transaction are interleaved with those of some other transaction.
Equivalence Schedules
An equivalence schedule can be of the following types −
Result Equivalence
If two schedules produce the same result after execution, they are said to be result
equivalent. They may yield the same result for some value and different results for
another set of values. That's why this equivalence is not generally considered
significant.
View Equivalence
Two schedules would be view equivalent if the transactions in both the schedules
perform similar actions in a similar manner.
For example −
If T reads the initial data in S1, then it also reads the initial data in S2.
If T reads the value written by J in S1, then it also reads the value written by
J in S2.
If T performs the final write on the data value in S1, then it also performs the
final write on the data value in S2.
Conflict Equivalence
Two schedules having multiple transactions with conflicting operations are said to be
conflict equivalent if and only if −
Both the schedules contain the same set of transactions.
The order of conflicting pairs of operations is maintained in both the schedules.
Note − View equivalent schedules are view serializable and conflict equivalent
schedules are conflict serializable. All conflict serializable schedules are view
serializable too.
States of Transactions
A transaction in a database can be in one of the following states −
Active − In this state, the transaction is being executed. This is the initial
state of every transaction.
Partially Committed − When a transaction executes its final operation, it is
said to be in a partially committed state.
Failed − A transaction is said to be in a failed state if any of the checks made
by the database recovery system fails. A failed transaction can no longer
proceed further.
Aborted − If any of the checks fails and the transaction has reached a failed
state, then the recovery manager rolls back all its write operations on the
database to bring the database back to its original state where it was prior to
the execution of the transaction. Transactions in this state are called aborted.
The database recovery module can select one of the two operations after a
transaction aborts −
Re-start the transaction
Kill the transaction
Lock-based Protocols
Database systems equipped with lock-based protocols use a mechanism by which
any transaction cannot read or write data until it acquires an appropriate lock on it.
Locks are of two kinds −
Binary Locks − A lock on a data item can be in two states; it is either locked
or unlocked.
Shared/exclusive − This type of locking mechanism differentiates the locks
based on their uses. If a lock is acquired on a data item to perform a write
operation, it is an exclusive lock. Allowing more than one transaction to write
on the same data item would lead the database into an inconsistent state.
Read locks are shared because no data value is being changed.
Pre-claiming protocols evaluate their operations and create a list of data items on
which they need locks. Before initiating an execution, the transaction requests the
system for all the locks it needs beforehand. If all the locks are granted, the
transaction executes and releases all the locks when all its operations are over. If all
the locks are not granted, the transaction rolls back and waits until all the locks are
granted.
This locking protocol divides the execution phase of a transaction into three parts. In
the first part, when the transaction starts executing, it seeks permission for the locks
it requires. The second part is where the transaction acquires all the locks. As soon
as the transaction releases its first lock, the third phase starts. In this phase, the
transaction cannot demand any new locks; it only releases the acquired locks.
Two-phase locking has two phases, one is growing, where all the locks are being
acquired by the transaction; and the second phase is shrinking, where the locks held
by the transaction are being released.
To claim an exclusive (write) lock, a transaction must first acquire a shared (read)
lock and then upgrade it to an exclusive lock.
The first phase of Strict-2PL is same as 2PL. After acquiring all the locks in the first
phase, the transaction continues to execute normally. But in contrast to 2PL, Strict-
2PL does not release a lock after using it. Strict-2PL holds all the locks until the
commit point and releases all the locks at a time.
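Many SQL systems expose row-level locks that behave like the shared/exclusive locks described above; for instance, SELECT ... FOR UPDATE (available in PostgreSQL, MySQL/InnoDB, and Oracle) takes an exclusive lock on the selected rows and holds it until commit, much as Strict-2PL prescribes. The accounts table is again a hypothetical example −
BEGIN;
-- Exclusive (write) lock on the selected row, held until COMMIT.
SELECT balance FROM accounts WHERE account_id = 'A' FOR UPDATE;
UPDATE accounts SET balance = balance - 500 WHERE account_id = 'A';
COMMIT;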
Timestamp-based Protocols
The most commonly used concurrency protocol is the timestamp based protocol. This
protocol uses either system time or logical counter as a timestamp.
Lock-based protocols manage the order between the conflicting pairs among
transactions at the time of execution, whereas timestamp-based protocols start
working as soon as a transaction is created.
Every transaction has a timestamp associated with it, and the ordering is determined
by the age of the transaction. A transaction created at 0002 clock time would be
older than all other transactions that come after it. For example, any transaction 'y'
entering the system at 0004 is two seconds younger and the priority would be given
to the older one.
In addition, every data item is given the latest read and write-timestamp. This lets
the system know when the last ‘read and write’ operation was performed on the data
item.
The basic timestamp-ordering protocol works as follows −
If a transaction Ti issues a read(X) operation −
If TS(Ti) < W-timestamp(X), the operation is rejected and Ti is rolled back.
If TS(Ti) ≥ W-timestamp(X), the operation is executed.
If a transaction Ti issues a write(X) operation −
If TS(Ti) < R-timestamp(X), the operation is rejected and Ti is rolled back.
If TS(Ti) < W-timestamp(X), the operation is rejected and Ti is rolled back.
Otherwise, the operation is executed, and the data item's timestamps are updated.
Time-stamp ordering rules can be modified to make the schedule view serializable.
DBMS - Deadlock
In a multi-process system, deadlock is an unwanted situation that arises in a shared
resource environment, where a process indefinitely waits for a resource that is held
by another process.
For example, assume a set of transactions {T0, T1, T2, ...,Tn}. T0 needs a resource
X to complete its task. Resource X is held by T1, and T1 is waiting for a resource Y,
which is held by T2. T2 is waiting for resource Z, which is held by T0. Thus, all the
processes wait for each other to release resources. In this situation, none of the
processes can finish their task. This situation is known as a deadlock.
Deadlocks are not healthy for a system. In case a system is stuck in a deadlock, the
transactions involved in the deadlock are either rolled back or restarted.
Deadlock Prevention
To prevent any deadlock situation in the system, the DBMS aggressively inspects all
the operations, where transactions are about to execute. The DBMS inspects the
operations and analyzes if they can create a deadlock situation. If it finds that a
deadlock situation might occur, then that transaction is never allowed to be
executed.
There are deadlock prevention schemes that use timestamp ordering mechanism of
transactions in order to predetermine a deadlock situation.
Wait-Die Scheme
If TS(Ti) < TS(Tj) − that is, Ti, which is requesting a conflicting lock, is older
than Tj − then Ti is allowed to wait until the data item is available.
If TS(Ti) > TS(Tj) − that is, Ti is younger than Tj − then Ti dies; it is rolled back
and restarted later with a random delay but with the same timestamp.
This scheme allows the older transaction to wait but kills the younger one.
Wound-Wait Scheme
If TS(Ti) < TS(Tj), then Ti forces Tj to be rolled back − that is Ti wounds Tj. Tj
is restarted later with a random delay but with the same timestamp.
If TS(Ti) > TS(Tj), then Ti is forced to wait until the resource is available.
This scheme allows the younger transaction to wait; but when an older transaction
requests an item held by a younger one, the older transaction forces the younger
one to abort and release the item.
In both the cases, the transaction that enters the system at a later stage is aborted.
Deadlock Avoidance
Aborting a transaction is not always a practical approach. Instead, deadlock
avoidance mechanisms can be used to detect any deadlock situation in advance.
Methods like the "wait-for graph" are available, but they are suitable only for those
systems where transactions are lightweight and have fewer instances of resources. In a
bulky system, deadlock prevention techniques may work well.
Wait-for Graph
This is a simple method available to track if any deadlock situation may arise. For
each transaction entering into the system, a node is created. When a transaction Ti
requests for a lock on an item, say X, which is held by some other transaction Tj, a
directed edge is created from Ti to Tj. If Tj releases item X, the edge between them
is dropped and Ti locks the data item.
The system maintains this wait-for graph for every transaction waiting for some data
items held by others. The system keeps checking if there's any cycle in the graph.
First, do not allow any request for an item, which is already locked by another
transaction. This is not always feasible and may cause starvation, where a
transaction indefinitely waits for a data item and can never acquire it.
The second option is to roll back one of the transactions. It is not always
feasible to roll back the younger transaction, as it may be more important than the
older one. With the help of some relative algorithm, a transaction is chosen
which is to be aborted. This transaction is known as the victim and the
process is known as victim selection.
Recovery
When the system recovers from a failure, it can restore the latest dump.
It can maintain a redo-list and an undo-list as checkpoints.
It can recover the system by consulting undo-redo lists to restore the state of
all transactions up to the last checkpoint.
Databases that have grown large are too bulky to be frequently backed up. In such cases, we
have techniques to restore a database just by looking at its logs. So, all
that we need to do here is to take a backup of all the logs at frequent intervals of
time. The database can be backed up once a week, and the logs, being very small, can
be backed up every day or as frequently as possible.
Remote Backup
Remote backup provides a sense of security in case the primary location where the
database is located gets destroyed. Remote backup can be offline or real-time or
online. In case it is offline, it is maintained manually.
Online backup systems are more real-time and lifesavers for database administrators
and investors. An online backup system is a mechanism where every bit of the real-
time data is backed up simultaneously at two distant places. One of them is directly
connected to the system and the other one is kept at a remote place as backup.
As soon as the primary database storage fails, the backup system senses the failure
and switches the user system to the remote storage. Sometimes this is so instant
that the users can’t even realize a failure.
Crash Recovery
DBMS is a highly complex system with hundreds of transactions being executed
every second. The durability and robustness of a DBMS depends on its complex
architecture and its underlying hardware and system software. If it fails or crashes
amid transactions, it is expected that the system would follow some sort of algorithm
or techniques to recover lost data.
Failure Classification
To see where the problem has occurred, we generalize a failure into various
categories, as follows −
Transaction failure
A transaction has to abort when it fails to execute or when it reaches a point from
where it can’t go any further. This is called transaction failure where only a few
transactions or processes are hurt.
System Crash
There are problems − external to the system − that may cause the system to stop
abruptly and cause the system to crash. For example, interruptions in power supply
may cause the failure of underlying hardware or software failure.
Disk Failure
Disk failures include formation of bad sectors, unreachability of the disk, disk head
crash, or any other failure that destroys all or a part of the disk storage.
Storage Structure
We have already described the storage system. In brief, the storage structure can be
divided into two categories −
Volatile storage − Memory that cannot survive system crashes, such as main
memory and cache memory; it is fast but limited in capacity.
Non-volatile storage − Memory that survives system crashes, such as hard disks,
magnetic tapes, and flash memory; it is large in capacity but slower to access.
When a DBMS recovers from a crash, it should do the following −
It should check the states of all the transactions that were being executed.
A transaction may be in the middle of some operation; the DBMS must ensure
the atomicity of the transaction in this case.
It should check whether the transaction can be completed now or needs to
be rolled back.
No transaction should be allowed to leave the DBMS in an inconsistent state.
There are two types of techniques, which can help a DBMS in recovering as well as
maintaining the atomicity of a transaction −
Maintaining the logs of each transaction, and writing them onto some stable
storage before actually modifying the database.
Maintaining shadow paging, where the changes are done on a volatile
memory, and later, the actual database is updated.
Log-based Recovery
A log is a sequence of records which maintains a record of the actions performed by a
transaction. It is important that the logs are written prior to the actual modification
and stored on a stable storage medium, which is failsafe.
When a transaction enters the system and starts execution, it writes a log record −
<Tn, Start>
When the transaction modifies an item X from old value V1 to new value V2, it logs −
<Tn, X, V1, V2>
When the transaction finishes, it logs −
<Tn, commit>
Checkpoint
Keeping and maintaining logs in real time and in real environment may fill out all the
memory space available in the system. As time passes, the log file may grow too big
to be handled at all. Checkpoint is a mechanism where all the previous logs are
removed from the system and stored permanently in a storage disk. Checkpoint
declares a point before which the DBMS was in consistent state, and all the
transactions were committed.