
DELTA CENTRAL COLLEGE OF MANAGEMENT AND

SCIENCE (DECCOMS)
UGHELLI, DELTA STATE.

in affiliation with,
TEMPLE GATE POLYTECHNIC
ABA, ABIA STATE.

LECTURE NOTES

ON

DATABASE DESIGN II
(COM 322)

BY

MR. PAUL APELEOKHA


CHAPTER ONE
OBJECT ORIENTED DATA MODEL
Object-oriented databases emerged to meet the need for coupling object-oriented programming languages with a database. Although object-oriented databases have been around since the late 1970s, they have seen relatively low adoption in recent decades alongside the growing prevalence of functional programming languages and relational databases. But a growing user community is awakening to their ability to deliver fast queries with lighter code.

An object-oriented database (OOD) is a database system that can work with complex data
objects — that is, objects that mirror those used in object-oriented programming
languages.

In object-oriented programming, everything is an object, and many objects are quite complex, having different properties and methods. An object-oriented database management system works in concert with an object-oriented programming language to facilitate the storage and retrieval of object-oriented data.

You might be thinking, “Wait, I use objects in my programming all the time. And I use a
database. So, does that mean the database I use is an OOD?” Probably not, and the
reason has to do with one of the main features of OOD: object data persistence.

When your program is running, you might have an object—for example, an instance of
a task. That object has properties like a name and status. It might also have some
methods like update_task() or get_task_history(). Somewhere early in your program, you
initialized that task object, and now you have access to it because it is stored in memory.

What happens when your program terminates execution? Your object… is gone. That
data object was transient, not persistent. The next time your program starts up, you’ll need
to retrieve those data values (like name and status) from your database all over again in
order to initialize a new task object instance.

With an OOD, data objects are stored with all of their properties in the database. When
your program terminates, the objects continue to persist, stored in the OOD. When your
program starts up again, it can retrieve an object with the properties from the database.
The process of storing and retrieving a complex data object with an OOD is transparent to
the user of the database.
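To make this concrete, the sketch below shows, in Java, what persisting and re-loading a task might look like. The ObjectStore interface and its in-memory stand-in are illustrative assumptions, not the API of Realm or any other real OOD product; in a real OOD the saved object would survive program termination.

import java.util.Date;
import java.util.HashMap;
import java.util.Map;

// A hypothetical object store interface -- an assumption for illustration,
// not the API of Realm or any other real OOD product.
interface ObjectStore {
    void save(String key, Object obj);          // persist the object with all its properties
    <T> T find(Class<T> type, String key);      // load a persisted object back by key
}

class Task {
    String name;
    String status;
    Date create_date;
}

public class PersistenceSketch {
    // Stand-in store backed by a map; a real OOD would keep objects on disk
    // so they survive program termination.
    static class InMemoryStore implements ObjectStore {
        private final Map<String, Object> data = new HashMap<>();
        public void save(String key, Object obj) { data.put(key, obj); }
        public <T> T find(Class<T> type, String key) { return type.cast(data.get(key)); }
    }

    public static void main(String[] args) {
        ObjectStore store = new InMemoryStore();

        Task task = new Task();
        task.name = "take the garbage out";
        task.status = "open";
        task.create_date = new Date();

        // The whole object is stored as-is: no decomposition into rows and columns.
        store.save(task.name, task);

        // Later (in a real OOD, even after a restart) the object comes back intact.
        Task again = store.find(Task.class, "take the garbage out");
        System.out.println(again.status + " / " + again.create_date);
    }
}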

This is quite different from relational databases (like MySQL or SQLite) but not
significantly from document databases (like MongoDB). In relational databases, the
developer needs to compose an object from the results of a set of queries, while
in document databases the mapping of the document fields to the class properties should
be almost transparent.
OODs have been around for decades. MongoDB Realm is one of the new and promising
pieces of software in that field.

Where Are Object-Oriented Databases Used?

OODs are most often used with object-oriented programming languages like Java, Kotlin,
C#, Node JS (React), and Swift. Industries that use OODs are typically those built on an
object-oriented language and wanting to boost productivity while working with complex
data structures.

One good example of an OOD is online IT training provider CBT Nuggets. CBT
Nuggets uses Realm to offer more than 5,000 courses ranging from basic computer skills
to complex network management (see this case study). CBT Nuggets works to ensure
subscribers can view content from anywhere, at any time. To meet this promise, classes
are delivered through streaming videos that range from 10- to 20-minute “nuggets.”
Content is available on both desktop and through a mobile app.

With Realm and MongoDB Atlas Device Sync, CBT was able to develop with the cross-platform SDKs for Android and iOS devices, using direct object-to-business-class mapping and seamless sync to MongoDB Atlas.

Components of Object-Oriented Data Models

The elements of an OODM are:

 Object: A real-world entity, such as a specific task in a to-do list, for example "take the garbage out".
 Attributes and Methods: An object has state and behaviors. An object has
properties (which might also be called attributes) like name, status,
and create_date. The set of properties taken together represents its state. Along
with this, an object has behaviors (also known as methods, actions, or functions)
that modify or operate on its properties, like update_task() or get_task_history().
 Class: The grouping of all objects with the same properties and behaviors form
a class. In our example above, we talked about task objects. These objects
together all belong to the Task class.

class Task
{
    String name;
    String status;
    java.util.Date create_date;

    public void update_task(String status)
    {
        // modify the object's state
        this.status = status;
    }
}
 Object-Oriented Design Patterns: Object-oriented data modeling also implies
certain principles like inheritance, polymorphism, overriding, and association. An
object-oriented database system will support these same concepts.

Relational Versus Object-Oriented Databases

What Is the Difference Between a Relational Database and Object-Oriented?

Relational database management systems (RDBMS) work with tables, with each row in
the table representing a record. The columns in a row represent the attributes of an
individual record. Associations between records (“A Company has many Employees. An
Employee belongs to a Company”) are facilitated with foreign keys in one table
referencing IDs in another table. These associations make up the “relational” part of
relational databases.

Data values stored in relational databases are atomic and primitive. By primitive, we mean
that they are types like characters, text strings, numbers, and hashes. Even though
MySQL and SQLite support the JSON (JavaScript Object Notation) data type, that’s not
the same as supporting objects in the sense that OODs do.

Contrast this with the OOD, which typically stores and manages objects directly on the
database server’s disk. There are no tables, no rows, no columns, no foreign keys. There
are only objects.

Associations between objects in an OOD can also be established and persist, which can
lead to powerful and fast querying of data across complex relationships.
Is NoSQL an Object-Oriented Database?

Popular NoSQL databases like MongoDB and AWS DynamoDB are document-oriented
databases while others like Casandra are key-value stores. Document databases, like
OODs, don’t work in terms of tables, rows, and columns; but some languages might need
an ODM to better work with objects.

Every “record” is seen as a document, which can shrink and grow in terms of the
attributes it stores for a given entity. At times, relationships between documents might be
the preferred approach.

For example, you can think of a BlogPost document that has associations with
multiple Comment documents and Like documents. At other times, relationships can be
embedded directly into a document. In this case, you can imagine a BlogPost document
with a Comments attribute which is an array of text strings and usernames, and then
another Likes attribute which is an array of usernames and timestamps.

Document databases provide flexible structures that scale well horizontally. They can be powerful in storing very complex documents which, on the surface, might seem to be the same thing as objects, and many modern programming languages confuse the matter further by calling these documents "objects." However, these documents are not objects in the traditional sense of object-oriented programming as described above.

Document databases are similar but not the same thing as object-oriented databases.

What Are the Advantages of Object-Oriented Databases?

Because complex data objects, with all of their associations to other objects, persist in an OOD, the most significant advantage of the OOD over an RDBMS is the ability to query across these complex relationships very quickly.
 There are no slow “joins” as in an RDBMS. Instead, you have fast queries with
complex data.
 Since the database structure is so close to the programming objects, the code is
simpler and lighter.

As another example, we might think back to our task object instance, which cannot be
stored as-is in MySQL. It needs first to be decomposed into its attributes to be stored in
the table as a row with columns. The reverse process will involve
retrieval and composition. Not so with object-oriented or document databases. Have an
object? Store the whole thing in the database.
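As a rough sketch of that decomposition and re-composition step, the JDBC fragment below stores a task as a row and rebuilds it field by field. The tasks table, its columns, and the connection URL are assumptions for illustration; a JDBC driver and an existing table would be needed for it to actually run.

import java.sql.*;

public class RelationalTaskStore {
    // Decompose a task object into columns for a relational table.
    // Table name, columns, and the JDBC URL are hypothetical.
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:tasks.db")) {
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO tasks (name, status, create_date) VALUES (?, ?, ?)")) {
                insert.setString(1, "take the garbage out");
                insert.setString(2, "open");
                insert.setString(3, "2024-01-01");
                insert.executeUpdate();
            }

            // The reverse process: query the row, then re-compose the object field by field.
            try (PreparedStatement query = conn.prepareStatement(
                    "SELECT name, status, create_date FROM tasks WHERE name = ?")) {
                query.setString(1, "take the garbage out");
                try (ResultSet rs = query.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name") + " -> " + rs.getString("status"));
                    }
                }
            }
        }
    }
}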

What Are the Disadvantages of Object-Oriented Databases?

An OOD may be a great choice if you're using an object-oriented programming language and need to manage complex data with complex object-to-object associations. Designing and optimizing a database system for these kinds of complexities, however, also has its trade-offs.

For one thing, the relative performance of very simple database operations — the ones
you might do for a simple lookup of an attribute from a relational database table— may be
sub-optimal.

Additionally, while users of RDBMS can enjoy a standard query language (SQL), users of
object-oriented database systems may not have widely adopted standards at their
disposal. For the most part, each flavor of OOD is coupled tightly to an object-oriented
programming language, and querying syntax is very language-dependent.

Lastly, the OOD user community still feels small in comparison to the exploding
ecosystem of web development within the RDBMS space. But the community is fast-
growing and likely to make up for lost time.

Who Needs Object-Oriented Databases?

If your application is built with an object-oriented language, then there is likely an OOD or
document DB that couples well with your language.

Functional Dependency

Functional dependency (FD) is a constraint between two sets of attributes in a relation.

Functional dependency says that if two tuples have the same values for attributes A1, A2, ..., An, then those two tuples must also have the same values for attributes B1, B2, ..., Bn. A functional dependency is written with an arrow sign (→), that is, X → Y, where X functionally determines Y. The left-hand side attributes determine the values of the attributes on the right-hand side.
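A functional dependency can be checked mechanically on a relation instance: X → Y holds if no two tuples agree on X but disagree on Y. The Java sketch below, using a small made-up relation, illustrates the check.

import java.util.*;

public class FdCheck {
    // Returns true if X -> Y holds in the given relation instance:
    // no two tuples may agree on all attributes of X yet differ on some attribute of Y.
    static boolean holds(List<Map<String, String>> relation, List<String> x, List<String> y) {
        Map<List<String>, List<String>> seen = new HashMap<>();
        for (Map<String, String> tuple : relation) {
            List<String> xVals = new ArrayList<>();
            for (String a : x) xVals.add(tuple.get(a));
            List<String> yVals = new ArrayList<>();
            for (String a : y) yVals.add(tuple.get(a));
            List<String> previous = seen.putIfAbsent(xVals, yVals);
            if (previous != null && !previous.equals(yVals)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical Student relation: Stu_ID -> Stu_Name should hold here, City -> Stu_ID should not.
        List<Map<String, String>> student = List.of(
            Map.of("Stu_ID", "101", "Stu_Name", "Alex", "City", "Aba"),
            Map.of("Stu_ID", "102", "Stu_Name", "Maria", "City", "Aba"),
            Map.of("Stu_ID", "101", "Stu_Name", "Alex", "City", "Aba"));

        System.out.println(holds(student, List.of("Stu_ID"), List.of("Stu_Name"))); // true
        System.out.println(holds(student, List.of("City"), List.of("Stu_ID")));     // false
    }
}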

Armstrong's Axioms
If F is a set of functional dependencies, then the closure of F, denoted F+, is the set of all functional dependencies logically implied by F. Armstrong's Axioms are a set of inference rules that, when applied repeatedly, generate this closure.
• Reflexivity rule − If X is a set of attributes and Y is a subset of X, then X → Y holds.
• Augmentation rule − If X → Y holds and Z is a set of attributes, then XZ → YZ also holds. That is, adding attributes to a dependency does not change the basic dependency.
• Transitivity rule − As with the transitive rule in algebra, if X → Y holds and Y → Z holds, then X → Z also holds.
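Armstrong's axioms also yield a practical algorithm: the closure of an attribute set X (every attribute functionally determined by X) can be computed by applying the dependencies repeatedly until nothing new is added. A minimal sketch, assuming a small made-up set of dependencies:

import java.util.*;

public class AttributeClosure {
    // One functional dependency: lhs -> rhs.
    record Fd(Set<String> lhs, Set<String> rhs) {}

    // Compute X+, the set of all attributes functionally determined by X under F.
    static Set<String> closure(Set<String> x, List<Fd> f) {
        Set<String> result = new HashSet<>(x);          // reflexivity: X determines itself
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Fd fd : f) {
                // if result already covers the left-hand side, everything on the
                // right-hand side is also determined (augmentation/transitivity)
                if (result.containsAll(fd.lhs()) && result.addAll(fd.rhs())) {
                    changed = true;
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical dependencies: Stu_ID -> Zip, Zip -> City.
        List<Fd> f = List.of(
            new Fd(Set.of("Stu_ID"), Set.of("Zip")),
            new Fd(Set.of("Zip"), Set.of("City")));

        // {Stu_ID}+ = {Stu_ID, Zip, City}, so Stu_ID -> City follows transitively.
        System.out.println(closure(Set.of("Stu_ID"), f));
    }
}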

Trivial Functional Dependency

• Trivial − If a functional dependency (FD) X → Y holds, where Y is a subset of X, then it is called a trivial FD. Trivial FDs always hold.
• Non-trivial − If an FD X → Y holds, where Y is not a subset of X, then it is called a non-trivial FD.
• Completely non-trivial − If an FD X → Y holds, where X ∩ Y = Φ, it is said to be a completely non-trivial FD.

Normalization

If a database design is not perfect, it may contain anomalies, which are like a bad dream for any database administrator. Managing a database with anomalies is next to impossible.
• Update anomalies − If data items are scattered and are not linked to each other properly, it could lead to strange situations. For example, when we try to update one data item that has copies scattered over several places, a few instances get updated properly while a few others are left with old values. Such instances leave the database in an inconsistent state.
• Deletion anomalies − We try to delete a record, but parts of it are left undeleted because, without our being aware of it, the data is also saved somewhere else.
• Insertion anomalies − We try to insert data into a record that does not exist at all.
Normalization is a method to remove all these anomalies and bring the database to a consistent state.

First Normal Form

First Normal Form is defined in the definition of relations (tables) itself. This rule states that all the attributes in a relation must have atomic domains. The values in an atomic domain are indivisible units.

To convert a relation to First Normal Form, we re-arrange it so that each attribute contains only a single value from its pre-defined domain.

Second Normal Form

Before we learn about the second normal form, we need to understand the following −
• Prime attribute − An attribute that is part of a candidate key is known as a prime attribute.
• Non-prime attribute − An attribute that is not part of any candidate key is said to be a non-prime attribute.
If we follow second normal form, then every non-prime attribute should be fully functionally dependent on the candidate key. That is, if X → A holds, then there should not be any proper subset Y of X for which Y → A also holds.

Consider a Student_Project relation with attributes Stu_ID, Proj_ID, Stu_Name, and Proj_Name, where the prime key attributes are Stu_ID and Proj_ID. According to the rule, the non-key attributes, i.e. Stu_Name and Proj_Name, must be dependent upon both and not on either of the prime key attributes individually. But we find that Stu_Name can be identified by Stu_ID alone and Proj_Name can be identified by Proj_ID alone. This is called partial dependency, which is not allowed in Second Normal Form.

We break the relation into two relations, one holding (Stu_ID, Stu_Name) and the other (Proj_ID, Proj_Name), so that no partial dependency remains.

Third Normal Form

For a relation to be in Third Normal Form, it must be in Second Normal form and the
following must satisfy −
 No non-prime attribute is transitively dependent on prime key attribute.
 For any non-trivial functional dependency, X → A, then either −
o X is a superkey or,

o A is prime attribute.

Consider a Student_detail relation with attributes Stu_ID, Stu_Name, Zip, and City, where Stu_ID is the key and the only prime attribute. We find that City can be identified by Stu_ID as well as by Zip itself. Zip is not a superkey, and City is not a prime attribute. Additionally, Stu_ID → Zip → City, so there exists a transitive dependency.
To bring this relation into third normal form, we break it into two relations as follows −
Student_Detail (Stu_ID, Stu_Name, Zip) and ZipCodes (Zip, City)
Boyce-Codd Normal Form

Boyce-Codd Normal Form (BCNF) is an extension of Third Normal Form on strict terms.
BCNF states that −
 For any non-trivial functional dependency, X → A, X must be a super-key.
In the decomposition above, Stu_ID is the super-key in the relation Student_Detail and Zip is the super-key in the relation ZipCodes. So,
Stu_ID → Stu_Name, Zip
and
Zip → City
which confirms that both the relations are in BCNF.
Join Operations

We understand the benefits of taking a Cartesian product of two relations, which gives us all the possible tuples that are paired together. But it might not be feasible for us in certain cases to take a Cartesian product where we encounter huge relations with thousands of tuples having a considerably large number of attributes.
Join is a combination of a Cartesian product followed by a selection process. A Join
operation pairs two tuples from different relations, if and only if a given join condition is
satisfied.
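In code, a theta join can be pictured exactly that way: form every pair of tuples and keep only the pairs that satisfy θ. The sketch below is illustrative only and uses the same Student and Subjects data that appear in the theta join example in the next section.

import java.util.*;
import java.util.function.BiPredicate;

public class ThetaJoin {
    // Join = Cartesian product followed by a selection on the join condition theta.
    static List<Map<String, String>> join(List<Map<String, String>> r1,
                                          List<Map<String, String>> r2,
                                          BiPredicate<Map<String, String>, Map<String, String>> theta) {
        List<Map<String, String>> result = new ArrayList<>();
        for (Map<String, String> t1 : r1) {            // every pair of tuples (Cartesian product)...
            for (Map<String, String> t2 : r2) {
                if (theta.test(t1, t2)) {               // ...kept only if theta is satisfied
                    Map<String, String> combined = new LinkedHashMap<>(t1);
                    combined.putAll(t2);
                    result.add(combined);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map<String, String>> student = List.of(
            Map.of("SID", "101", "Name", "Alex", "Std", "10"),
            Map.of("SID", "102", "Name", "Maria", "Std", "11"));
        List<Map<String, String>> subjects = List.of(
            Map.of("Class", "10", "Subject", "Math"),
            Map.of("Class", "10", "Subject", "English"),
            Map.of("Class", "11", "Subject", "Music"),
            Map.of("Class", "11", "Subject", "Sports"));

        // Theta condition Student.Std = Subjects.Class (an equijoin, since it uses equality).
        join(student, subjects, (s, c) -> s.get("Std").equals(c.get("Class")))
            .forEach(System.out::println);
    }
}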
We will briefly describe various join types in the following sections.

Theta (θ) Join

Theta join combines tuples from different relations provided they satisfy the theta
condition. The join condition is denoted by the symbol θ.

Notation − R1 ⋈θ R2

R1 and R2 are relations having attributes (A1, A2, ..., An) and (B1, B2, ..., Bn) such that the attributes don't have anything in common, that is, R1 ∩ R2 = Φ.
Theta join can use all kinds of comparison operators.
Student

SID Name Std

101 Alex 10

102 Maria 11

Subjects

Class Subject

10 Math

10 English

11 Music

11 Sports

Student_Detail −
STUDENT ⋈Student.Std = Subject.Class SUBJECT
Student_detail

SID Name Std Class Subject

101 Alex 10 10 Math

101 Alex 10 10 English

102 Maria 11 11 Music

102 Maria 11 11 Sports

Equijoin

When Theta join uses only equality comparison operator, it is said to be equijoin. The
above example corresponds to equijoin.

Natural Join (⋈)

Natural join does not use any comparison operator. It does not concatenate the way a
Cartesian product does. We can perform a Natural Join only if there is at least one
common attribute that exists between two relations. In addition, the attributes must have
the same name and domain.
Natural join acts on those matching attributes where the values of attributes in both the relations are
same.

Courses

CID Course Dept

CS01 Database CS

ME01 Mechanics ME

EE01 Electronics EE

HoD

Dept Head

CS Alex

ME Maya

EE Mira

Courses ⋈ HoD

Dept CID Course Head

CS CS01 Database Alex

ME ME01 Mechanics Maya

EE EE01 Electronics Mira

Outer Joins

Theta Join, Equijoin, and Natural Join are called inner joins. An inner join includes only
those tuples with matching attributes and the rest are discarded in the resulting relation.
Therefore, we need to use outer joins to include all the tuples from the participating
relations in the resulting relation. There are three kinds of outer joins − left outer join, right
outer join, and full outer join.

Left Outer Join (R ⟕ S)

All the tuples from the Left relation, R, are included in the resulting relation. If there are tuples in R without
any matching tuple in the Right Relation S, then the S-attributes of the resulting relation are made NULL.

Left
A B

100 Database

101 Mechanics

102 Electronics

Right

A B

100 Alex

102 Maya

104 Mira

Left ⟕ Right

A B C D

100 Database 100 Alex

101 Mechanics --- ---

102 Electronics 102 Maya

Right Outer Join (R ⟖ S)

All the tuples from the Right relation, S, are included in the resulting relation. If there are tuples in S
without any matching tuple in R, then the R-attributes of resulting relation are made NULL.

Left ⟖ Right

A B C D

100 Database 100 Alex

102 Electronics 102 Maya

--- --- 104 Mira

Full Outer Join (R ⟗ S)

All the tuples from both participating relations are included in the resulting relation. If there are no
matching tuples for both relations, their respective unmatched attributes are made NULL.
Left ⟗ Right

A B C D

100 Database 100 Alex

101 Mechanics --- ---

102 Electronics 102 Maya

--- --- 104 Mira

CHAPTER TWO
FILE AND PHYSICAL STORAGE
Databases are stored in file formats, which contain records. At the physical level, the actual data is stored in an electromagnetic format on some device. These storage devices can be broadly categorized into three types −

 Primary Storage − The memory storage that is directly accessible to the CPU
comes under this category. CPU's internal memory (registers), fast memory
(cache), and main memory (RAM) are directly accessible to the CPU, as they are
all placed on the motherboard or CPU chipset. This storage is typically very small,
ultra-fast, and volatile. Primary storage requires continuous power supply in order to
maintain its state. In case of a power failure, all its data is lost.
 Secondary Storage − Secondary storage devices are used to store data for future
use or as backup. Secondary storage includes memory devices that are not a part
of the CPU chipset or motherboard, for example, magnetic disks, optical disks
(DVD, CD, etc.), hard disks, flash drives, and magnetic tapes.
 Tertiary Storage − Tertiary storage is used to store huge volumes of data. Since
such storage devices are external to the computer system, they are the slowest in
speed. These storage devices are mostly used to take the back up of an entire
system. Optical disks and magnetic tapes are widely used as tertiary storage.

Memory Hierarchy

A computer system has a well-defined hierarchy of memory. A CPU has direct access to its main memory as well as its inbuilt registers. The access time of the main memory is, however, much longer than a CPU cycle. To minimize this speed mismatch, cache memory is introduced. Cache memory provides the fastest access time and it contains data that is most frequently accessed by the CPU.
The memory with the fastest access is the costliest one. Larger storage devices offer slow
speed and they are less expensive, however they can store huge volumes of data as
compared to CPU registers or cache memory.

Magnetic Disks

Hard disk drives are the most common secondary storage devices in present computer
systems. These are called magnetic disks because they use the concept of magnetization
to store information. Hard disks consist of metal disks coated with magnetizable material.
These disks are placed vertically on a spindle. A read/write head moves in between the
disks and is used to magnetize or de-magnetize the spot under it. A magnetized spot can
be recognized as 0 (zero) or 1 (one).
Hard disks are formatted in a well-defined order to store data efficiently. A hard disk plate
has many concentric circles on it, called tracks. Every track is further divided
into sectors. A sector on a hard disk typically stores 512 bytes of data.

Redundant Array of Independent Disks

RAID, or Redundant Array of Independent Disks, is a technology to connect multiple secondary storage devices and use them as a single storage medium.
RAID consists of an array of disks in which multiple disks are connected together to achieve different goals. RAID levels define the use of disk arrays.
RAID 0
In this level, a striped array of disks is implemented. The data is broken down into blocks
and the blocks are distributed among disks. Each disk receives a block of data to
write/read in parallel. It enhances the speed and performance of the storage device.
There is no parity and backup in Level 0.
RAID 1
RAID 1 uses mirroring techniques. When data is sent to a RAID controller, it sends a copy
of data to all the disks in the array. RAID level 1 is also called mirroring and provides
100% redundancy in case of a failure.

RAID 2
RAID 2 records the Error Correction Code (ECC) using Hamming distance for its data, striped on different disks. As in level 0, each data bit in a word is recorded on a separate disk and the ECC codes of the data words are stored on a different set of disks. Due to its complex structure and high cost, RAID 2 is not commercially available.

RAID 3
RAID 3 stripes the data onto multiple disks. The parity bit generated for each data word is stored on a dedicated parity disk. This technique makes it possible to recover from single disk failures.

RAID 4
In this level, an entire block of data is written onto data disks and then the parity is
generated and stored on a different disk. Note that level 3 uses byte-level striping,
whereas level 4 uses block-level striping. Both level 3 and level 4 require at least three
disks to implement RAID.
RAID 5
RAID 5 writes whole data blocks onto different disks, but the parity generated for each data block stripe is distributed among all the data disks rather than being stored on a separate dedicated disk.

RAID 6
RAID 6 is an extension of level 5. In this level, two independent parities are generated and
stored in distributed fashion among multiple disks. Two parities provide additional fault
tolerance. This level requires at least four disk drives to implement RAID.
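The parity used by RAID levels 3 to 6 is ordinarily a bitwise XOR across the data blocks of a stripe: the parity block is the XOR of all data blocks, and any single missing block can be rebuilt by XOR-ing the surviving blocks with the parity. A minimal sketch:

public class RaidParity {
    // Parity block = XOR of all data blocks in the stripe.
    static byte[] parity(byte[][] blocks) {
        byte[] p = new byte[blocks[0].length];
        for (byte[] block : blocks) {
            for (int i = 0; i < p.length; i++) {
                p[i] ^= block[i];
            }
        }
        return p;
    }

    public static void main(String[] args) {
        byte[][] stripe = {
            "DATA-ON-DISK-0".getBytes(),
            "DATA-ON-DISK-1".getBytes(),
            "DATA-ON-DISK-2".getBytes()
        };
        byte[] p = parity(stripe);

        // Simulate losing disk 1: XOR the surviving blocks with the parity block
        // to reconstruct the missing data.
        byte[][] survivors = { stripe[0], stripe[2], p };
        System.out.println(new String(parity(survivors)));   // prints DATA-ON-DISK-1
    }
}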

Relative data and information is stored collectively in file formats. A file is a sequence of
records stored in binary format. A disk drive is formatted into several blocks that can store
records. File records are mapped onto those disk blocks.
File Organization

File Organization defines how file records are mapped onto disk blocks. We have four
types of File Organization to organize file records −

Heap File Organization

When a file is created using Heap File Organization, the Operating System allocates
memory area to that file without any further accounting details. File records can be placed
anywhere in that memory area. It is the responsibility of the software to manage the
records. Heap File does not support any ordering, sequencing, or indexing on its own.

Sequential File Organization

Every file record contains a data field (attribute) to uniquely identify that record. In
sequential file organization, records are placed in the file in some sequential order based
on the unique key field or search key. Practically, it is not possible to store all the records
sequentially in physical form.

Hash File Organization

Hash File Organization uses Hash function computation on some fields of the records.
The output of the hash function determines the location of disk block where the records
are to be placed.

Clustered File Organization


Clustered file organization is not considered good for large databases. In this mechanism,
related records from one or more relations are kept in the same disk block, that is, the
ordering of records is not based on primary key or search key.

File Operations

Operations on database files can be broadly classified into two categories −


 Update Operations
 Retrieval Operations
Update operations change the data values by insertion, deletion, or update. Retrieval
operations, on the other hand, do not alter the data but retrieve them after optional
conditional filtering. In both types of operations, selection plays a significant role. Other
than creation and deletion of a file, there could be several operations, which can be done
on files.
 Open − A file can be opened in one of the two modes, read mode or write mode.
In read mode, the operating system does not allow anyone to alter data. In other
words, data is read only. Files opened in read mode can be shared among several
entities. Write mode allows data modification. Files opened in write mode can be
read but cannot be shared.
 Locate − Every file has a file pointer, which tells the current position where the
data is to be read or written. This pointer can be adjusted accordingly. Using find
(seek) operation, it can be moved forward or backward.
 Read − By default, when files are opened in read mode, the file pointer points to
the beginning of the file. There are options where the user can tell the operating
system where to locate the file pointer at the time of opening a file. The very next
data to the file pointer is read.
 Write − User can select to open a file in write mode, which enables them to edit its
contents. It can be deletion, insertion, or modification. The file pointer can be
located at the time of opening or can be dynamically changed if the operating
system allows to do so.
 Close − This is the most important operation from the operating system’s point of
view. When a request to close a file is generated, the operating system
o removes all the locks (if in shared mode),
o saves the data (if altered) to the secondary storage media, and
o releases all the buffers and file handlers associated with the file.
The organization of data inside a file plays a major role here. The process of locating the file pointer to a desired record inside a file varies based on whether the records are arranged sequentially or clustered.
CHAPTER THREE
CONCEPT OF INDEXING AND HASHING
We know that data is stored in the form of records. Every record has a key field, which
helps it to be recognized uniquely.
Indexing is a data structure technique to efficiently retrieve records from the database files
based on some attributes on which the indexing has been done. Indexing in database
systems is similar to what we see in books.
Indexing is defined based on its indexing attributes. Indexing can be of the following types

 Primary Index − Primary index is defined on an ordered data file. The data file is
ordered on a key field. The key field is generally the primary key of the relation.
 Secondary Index − Secondary index may be generated from a field which is a
candidate key and has a unique value in every record, or a non-key with duplicate
values.
 Clustering Index − Clustering index is defined on an ordered data file. The data
file is ordered on a non-key field.
Ordered Indexing is of two types −
 Dense Index
 Sparse Index

Dense Index

In dense index, there is an index record for every search key value in the database. This
makes searching faster but requires more space to store index records itself. Index
records contain search key value and a pointer to the actual record on the disk.

Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains a search key and a pointer to the data on the disk. To search a record, we first follow the index record with the largest search key value that is less than or equal to the one we are looking for, and reach the corresponding location in the data file. If the record we are looking for is not where we land, the system starts a sequential search from there until the desired record is found.
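A sparse index lookup can be sketched as a binary search over the index entries followed by a short sequential scan of the data. In the sketch below, plain arrays stand in for the index and data blocks that a real DBMS would read from disk.

import java.util.*;

public class SparseIndexLookup {
    // One index record: a search-key value and a pointer (here, an offset into the data array).
    record IndexEntry(int key, int offset) {}

    // Find the last index entry whose key is <= the target, then scan forward sequentially.
    static Integer search(List<IndexEntry> index, int[] sortedKeys, int target) {
        int lo = 0, hi = index.size() - 1, start = 0;
        while (lo <= hi) {                              // binary search on the sparse index
            int mid = (lo + hi) / 2;
            if (index.get(mid).key() <= target) { start = index.get(mid).offset(); lo = mid + 1; }
            else hi = mid - 1;
        }
        for (int i = start; i < sortedKeys.length && sortedKeys[i] <= target; i++) {
            if (sortedKeys[i] == target) return i;      // sequential scan from the indexed block
        }
        return null;                                    // not present
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30, 40, 50, 60, 70, 80};                  // ordered data file
        List<IndexEntry> sparse = List.of(                              // one entry per "block" of three
            new IndexEntry(10, 0), new IndexEntry(40, 3), new IndexEntry(70, 6));
        System.out.println(search(sparse, data, 50));   // 4
        System.out.println(search(sparse, data, 55));   // null
    }
}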

Multilevel Index

Index records comprise search-key values and data pointers. Multilevel index is stored on
the disk along with the actual database files. As the size of the database grows, so does
the size of the indices. There is an immense need to keep the index records in the main
memory so as to speed up the search operations. If single-level index is used, then a
large size index cannot be kept in memory which leads to multiple disk accesses.
Multi-level Index helps in breaking down the index into several smaller indices in order to
make the outermost level so small that it can be saved in a single disk block, which can
easily be accommodated anywhere in the main memory.

B+ Tree

A B+ tree is a balanced multi-way search tree that follows a multi-level index format. The leaf nodes of a B+ tree hold the actual data pointers. A B+ tree ensures that all leaf nodes remain at the same height, and is thus balanced. Additionally, the leaf nodes are linked together in a linked list; therefore, a B+ tree can support random access as well as sequential access.
Structure of B+ Tree
Every leaf node is at an equal distance from the root node. A B+ tree is of order n, where n is fixed for every B+ tree.

Internal nodes −
 Internal (non-leaf) nodes contain at least ⌈n/2⌉ pointers, except the root node.
 At most, an internal node can contain n pointers.
Leaf nodes −
 Leaf nodes contain at least ⌈n/2⌉ record pointers and ⌈n/2⌉ key values.
 At most, a leaf node can contain n record pointers and n key values.
 Every leaf node contains one block pointer P to point to next leaf node and forms a
linked list.
B+ Tree Insertion
• B+ trees are filled from the bottom, and each entry is made at a leaf node.
• If a leaf node overflows −
o Split the node into two parts at i = ⌊(m+1)/2⌋.
o The first i entries are stored in one node.
o The rest of the entries (from i+1 onwards) are moved to a new node.
o The ith key is duplicated at the parent of the leaf.
• If a non-leaf node overflows −
o Split the node into two parts at i = ⌈(m+1)/2⌉.
o Entries up to i are kept in one node.
o The rest of the entries are moved to a new node.
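As a small illustration of the leaf-split rule, the sketch below splits an overflowing list of keys at i = ⌊(m+1)/2⌋ and returns the key that, per the rule above, is duplicated at the parent. (Conventions differ; some B+ tree variants copy up the first key of the new right node instead.) It handles keys only and ignores record pointers and the rest of the tree machinery.

import java.util.*;

public class LeafSplit {
    // Split an overflowing leaf (m+1 keys) at i = floor((m+1)/2):
    // the first i keys stay in the old leaf, the rest move to a new leaf,
    // and the ith key is duplicated at the parent.
    static int splitLeaf(List<Integer> overflowing, List<Integer> left, List<Integer> right) {
        int i = overflowing.size() / 2;                            // i = floor((m+1)/2)
        left.addAll(overflowing.subList(0, i));                    // first i entries stay
        right.addAll(overflowing.subList(i, overflowing.size()));  // entries i+1 onwards move
        return left.get(i - 1);                                    // key duplicated at the parent
    }

    public static void main(String[] args) {
        // A leaf of order m = 4 that has just received a fifth key.
        List<Integer> overflowing = List.of(5, 12, 18, 27, 33);
        List<Integer> left = new ArrayList<>(), right = new ArrayList<>();
        int separator = splitLeaf(overflowing, left, right);
        System.out.println(left + " | " + right + "  parent key: " + separator);
        // prints: [5, 12] | [18, 27, 33]  parent key: 12
    }
}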
B+ Tree Deletion
 B+ tree entries are deleted at the leaf nodes.
 The target entry is searched and deleted.
o If it is an internal node, delete and replace with the entry from the left
position.
 After deletion, underflow is tested,
o If underflow occurs, distribute the entries from the nodes left to it.
 If distribution is not possible from left, then
o Distribute from the nodes right to it.
 If distribution is not possible from left or from right, then
o Merge the node with left and right to it.
For a huge database structure, it can be nearly impossible to search through all the index values across all its levels and then reach the destination data block to retrieve the desired data. Hashing is an effective technique to calculate the direct location of a data record on the disk without using an index structure.
Hashing uses hash functions with search keys as parameters to generate the address of a
data record.

Hash Organization

 Bucket − A hash file stores data in bucket format. Bucket is considered a unit of
storage. A bucket typically stores one complete disk block, which in turn can store
one or more records.
 Hash Function − A hash function, h, is a mapping function that maps all the set of
search-keys K to the address where actual records are placed. It is a function from
search keys to bucket addresses.

Static Hashing

In static hashing, when a search-key value is provided, the hash function always computes the same address. For example, if a mod-4 hash function is used, it generates only 4 possible bucket addresses. The output address is always the same for a given key. The number of buckets provided remains unchanged at all times.
Operation
 Insertion − When a record is required to be entered using static hash, the hash
function h computes the bucket address for search key K, where the record will be
stored.
Bucket address = h(K)
 Search − When a record needs to be retrieved, the same hash function can be
used to retrieve the address of the bucket where the data is stored.
 Delete − This is simply a search followed by a deletion operation.

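A minimal sketch of static hashing with a fixed number of buckets, using h(K) = K mod N as the hash function. Overflow handling is omitted; the point is only that insertion, search, and deletion all recompute the same bucket address.

import java.util.*;

public class StaticHashFile {
    private final List<List<Integer>> buckets;
    private final int n;

    StaticHashFile(int n) {                              // the number of buckets never changes
        this.n = n;
        buckets = new ArrayList<>();
        for (int i = 0; i < n; i++) buckets.add(new ArrayList<>());
    }

    int address(int key) { return key % n; }             // bucket address = h(K)

    void insert(int key) { buckets.get(address(key)).add(key); }                      // Insertion

    boolean search(int key) { return buckets.get(address(key)).contains(key); }       // Search

    void delete(int key) { buckets.get(address(key)).remove(Integer.valueOf(key)); }  // Delete = search + remove

    public static void main(String[] args) {
        StaticHashFile file = new StaticHashFile(4);     // mod-4: addresses 0..3
        file.insert(103);
        file.insert(207);
        System.out.println(file.address(103) + " " + file.search(103));  // 3 true
        file.delete(103);
        System.out.println(file.search(103));            // false
    }
}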
Bucket Overflow

The condition of bucket-overflow is known as collision. This is a fatal state for any static
hash function. In this case, overflow chaining can be used.
 Overflow Chaining − When buckets are full, a new bucket is allocated for the
same hash result and is linked after the previous one. This mechanism is
called Closed Hashing.
 Linear Probing − When a hash function generates an address at which data is
already stored, the next free bucket is allocated to it. This mechanism is
called Open Hashing.

Dynamic Hashing

The problem with static hashing is that it does not expand or shrink dynamically as the
size of the database grows or shrinks. Dynamic hashing provides a mechanism in which
data buckets are added and removed dynamically and on-demand. Dynamic hashing is
also known as extendible hashing.
Hash function, in dynamic hashing, is made to produce a large number of values and only
a few are used initially.
Organization

The prefix of an entire hash value is taken as a hash index. Only a portion of the hash value is used for computing bucket addresses. Every hash index has a depth value to signify how many bits are used for computing a hash function. These bits can address 2^n buckets. When all these bits are consumed − that is, when all the buckets are full − then the depth value is increased linearly and twice as many buckets are allocated.

Operation

• Querying − Look at the depth value of the hash index and use those bits to compute the bucket address.
• Update − Perform a query as above and update the data.
• Deletion − Perform a query to locate the desired data and delete it.
• Insertion − Compute the address of the bucket.
o If the bucket is already full −
▪ Add more buckets.
▪ Add additional bits to the hash value.
▪ Re-compute the hash function.
o Else, add the data to the bucket.
o If all the buckets are full, perform the remedies of static hashing.
Hashing is not favorable when the data is organized in some order and the queries require a range of data. It performs best when the data is discrete and random. Hashing algorithms are more complex to implement than indexing, but all hash operations are done in constant time.
CHAPTER FOUR
Query Processing in DBMS
Query Processing is the activity performed in extracting data from the database. Query processing involves several steps for fetching the data from the database. The steps involved are:

1. Parsing and translation

2. Optimization

3. Evaluation

The query processing works in the following way:

Parsing and Translation

Query processing includes certain activities for data retrieval. The user poses a query in a high-level database language such as SQL. The query is translated into expressions that can be used at the physical level of the file system, and then the actual evaluation of the query, along with a variety of query-optimizing transformations, takes place. SQL, or Structured Query Language, is well suited for humans, but it is not suitable as the internal representation of the query within the system; relational algebra is well suited for the internal representation of a query.
The translation process in query processing is similar to parsing a query. When a user executes a query, the parser in the system checks the syntax of the query and verifies the names of the relations in the database, the tuples, and the required attribute values in order to generate the internal form of the query. The parser creates a tree of the query, known as a 'parse tree', and then translates it into relational algebra. In doing so, it also replaces all uses of views that appear in the query.
The working of query processing is described below.

Suppose a user executes a query. As we have learned, there are various methods of extracting the data from the database. Suppose, in SQL, a user wants to fetch the names of the employees whose salary is greater than 10000. The following query is issued:

select emp_name from Employee where salary>10000;

To make the system understand the user query, it is translated into relational algebra. This query can be brought into equivalent relational algebra forms such as:

o π emp_name (σ salary>10000 (Employee))

o π emp_name (σ salary>10000 (π emp_name, salary (Employee)))

After translating the given query, each relational algebra operation can be executed using one of several different algorithms. In this way, query processing begins its work.
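To make the two steps concrete, the sketch below evaluates the selection and the projection from the example over a small in-memory Employee relation. A real engine would, of course, choose among different algorithms (scans, index lookups, and so on) for each operation.

import java.util.*;
import java.util.stream.Collectors;

public class AlgebraEvaluation {
    record Employee(String empName, int salary) {}

    public static void main(String[] args) {
        List<Employee> employee = List.of(
            new Employee("Alex", 8000),
            new Employee("Maria", 15000),
            new Employee("Paul", 12000));

        // pi_emp_name( sigma_salary>10000 (Employee) )
        List<String> result = employee.stream()
            .filter(e -> e.salary() > 10000)          // selection  (sigma)
            .map(Employee::empName)                   // projection (pi)
            .collect(Collectors.toList());

        System.out.println(result);                   // [Maria, Paul]
    }
}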

Evaluation

In addition to the relational algebra translation, the translated relational algebra expression is annotated with the instructions used for specifying and evaluating each operation. Thus, after translating the user query, the system executes a query evaluation plan.

Query Evaluation Plan


o In order to fully evaluate a query, the system needs to construct a query
evaluation plan.
o The annotations in the evaluation plan may refer to the algorithms to be used for
the particular index or the specific operations.
o Such relational algebra with annotations is referred to as Evaluation Primitives.
The evaluation primitives carry the instructions needed for the evaluation of the
operation.
o Thus, a query evaluation plan defines a sequence of primitive operations
used for evaluating a query. The query evaluation plan is also referred to
as the query execution plan.
o A query execution engine is responsible for generating the output of the given
query. It takes the query execution plan, executes it, and finally makes the
output for the user query.

Optimization
o The cost of the query evaluation can vary for different types of queries. Although the system is responsible for constructing the evaluation plan, the user does not need to write the query efficiently.
o Usually, a database system generates an efficient query evaluation plan, which minimizes its cost. This task, performed by the database system, is known as Query Optimization.
o For optimizing a query, the query optimizer should have an estimated cost analysis of each operation. This is because the overall operation cost depends on the memory allocated to the various operations, execution costs, and so on.

Finally, after selecting an evaluation plan, the system evaluates the query
and produces the output of the query.
TRANSACTIONS AND CONCURRENCY
A transaction can be defined as a group of tasks. A single task is the minimum
processing unit which cannot be divided further.
Let’s take an example of a simple transaction. Suppose a bank employee transfers
N500 from A's account to B's account. This very simple and small transaction involves
several low-level tasks.
A’s Account
Open_Account(A)
Old_Balance = A.balance
New_Balance = Old_Balance - 500
A.balance = New_Balance
Close_Account(A)
B’s Account
Open_Account(B)
Old_Balance = B.balance
New_Balance = Old_Balance + 500
B.balance = New_Balance
Close_Account(B)
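A rough JDBC sketch of the same transfer as one atomic transaction: either both account updates commit together or both are rolled back. The accounts table, its columns, and the connection URL are assumptions for illustration.

import java.sql.*;

public class TransferTransaction {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:bank.db")) {
            conn.setAutoCommit(false);                 // group the low-level tasks into one transaction
            try (PreparedStatement debit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE owner = ?");
                 PreparedStatement credit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE owner = ?")) {

                debit.setInt(1, 500);  debit.setString(2, "A");  debit.executeUpdate();
                credit.setInt(1, 500); credit.setString(2, "B"); credit.executeUpdate();

                conn.commit();                         // both updates become durable together
            } catch (SQLException e) {
                conn.rollback();                       // atomicity: undo the partial work on failure
                throw e;
            }
        }
    }
}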

ACID Properties
A transaction is a very small unit of a program and it may contain several low level
tasks. A transaction in a database system must
maintain Atomicity, Consistency, Isolation, and Durability − commonly known as ACID
properties − in order to ensure accuracy, completeness, and data integrity.
 Atomicity − This property states that a transaction must be treated as an atomic
unit, that is, either all of its operations are executed or none. There must be no
state in a database where a transaction is left partially completed. States should
be defined either before the execution of the transaction or after the
execution/abortion/failure of the transaction.
 Consistency − The database must remain in a consistent state after any
transaction. No transaction should have any adverse effect on the data residing in
the database. If the database was in a consistent state before the execution of a
transaction, it must remain consistent after the execution of the transaction as
well.
 Durability − The database should be durable enough to hold all its latest
updates even if the system fails or restarts. If a transaction updates a chunk of
data in a database and commits, then the database will hold the modified data. If
a transaction commits but the system fails before the data could be written on to
the disk, then that data will be updated once the system springs back into action.
 Isolation − In a database system where more than one transaction is being
executed simultaneously and in parallel, the property of isolation states that all
the transactions will be carried out and executed as if it is the only transaction in
the system. No transaction will affect the existence of any other transaction.

Serializability
When multiple transactions are being executed by the operating system in a multiprogramming environment, there are possibilities that instructions of one transaction are interleaved with those of some other transaction.
 Schedule − A chronological execution sequence of a transaction is called a
schedule. A schedule can have many transactions in it, each comprising of a
number of instructions/tasks.
 Serial Schedule − It is a schedule in which transactions are aligned in such a
way that one transaction is executed first. When the first transaction completes its
cycle, then the next transaction is executed. Transactions are ordered one after
the other. This type of schedule is called a serial schedule, as transactions are
executed in a serial manner.
In a multi-transaction environment, serial schedules are considered as a benchmark.
The execution sequence of an instruction in a transaction cannot be changed, but two
transactions can have their instructions executed in a random fashion. This execution
does no harm if two transactions are mutually independent and working on different
segments of data; but in case these two transactions are working on the same data,
then the results may vary. This ever-varying result may bring the database to an
inconsistent state.
To resolve this problem, we allow parallel execution of a transaction schedule, if its
transactions are either serializable or have some equivalence relation among them.

Equivalence Schedules
An equivalence schedule can be of the following types −

Result Equivalence
If two schedules produce the same result after execution, they are said to be result
equivalent. They may yield the same result for some value and different results for
another set of values. That's why this equivalence is not generally considered
significant.

View Equivalence
Two schedules would be view equivalent if the transactions in both the schedules perform similar actions in a similar manner.
For example −
 If T reads the initial data in S1, then it also reads the initial data in S2.
 If T reads the value written by J in S1, then it also reads the value written by J in
S2.
 If T performs the final write on the data value in S1, then it also performs the final
write on the data value in S2.
Conflict Equivalence
Two operations would be conflicting if they have the following properties −
• Both belong to separate transactions.
• Both access the same data item.
• At least one of them is a "write" operation.
Two schedules having multiple transactions with conflicting operations are said to be
conflict equivalent if and only if −
 Both the schedules contain the same set of Transactions.
 The order of conflicting pairs of operation is maintained in both the schedules.
Note − View equivalent schedules are view serializable and conflict equivalent
schedules are conflict serializable. All conflict serializable schedules are view
serializable too.

States of Transactions
A transaction in a database can be in one of the following states −

 Active − In this state, the transaction is being executed. This is the initial state of
every transaction.
 Partially Committed − When a transaction executes its final operation, it is said
to be in a partially committed state.
 Failed − A transaction is said to be in a failed state if any of the checks made by
the database recovery system fails. A failed transaction can no longer proceed
further.
 Aborted − If any of the checks fails and the transaction has reached a failed
state, then the recovery manager rolls back all its write operations on the
database to bring the database back to its original state where it was prior to the
execution of the transaction. Transactions in this state are called aborted. The
database recovery module can select one of the two operations after a
transaction aborts −
o Re-start the transaction
o Kill the transaction
 Committed − If a transaction executes all its operations successfully, it is said to
be committed. All its effects are now permanently established on the database
system.
In a multiprogramming environment where multiple transactions can be executed
simultaneously, it is highly important to control the concurrency of transactions. We
have concurrency control protocols to ensure atomicity, isolation, and serializability of
concurrent transactions. Concurrency control protocols can be broadly divided into two
categories −
 Lock based protocols
 Time stamp based protocols
Lock-based Protocols
Database systems equipped with lock-based protocols use a mechanism by which any
transaction cannot read or write data until it acquires an appropriate lock on it. Locks
are of two kinds −
 Binary Locks − A lock on a data item can be in two states; it is either locked or
unlocked.
 Shared/exclusive − This type of locking mechanism differentiates the locks
based on their uses. If a lock is acquired on a data item to perform a write
operation, it is an exclusive lock. Allowing more than one transaction to write on
the same data item would lead the database into an inconsistent state. Read
locks are shared because no data value is being changed.
There are four types of lock protocols available −

Simplistic Lock Protocol


Simplistic lock-based protocols allow transactions to obtain a lock on every object
before a 'write' operation is performed. Transactions may unlock the data item after
completing the ‘write’ operation.

Pre-claiming Lock Protocol


Pre-claiming protocols evaluate their operations and create a list of data items on which
they need locks. Before initiating an execution, the transaction requests the system for
all the locks it needs beforehand. If all the locks are granted, the transaction executes
and releases all the locks when all its operations are over. If all the locks are not
granted, the transaction rolls back and waits until all the locks are granted.

Two-Phase Locking 2PL


This locking protocol divides the execution phase of a transaction into three parts. In the
first part, when the transaction starts executing, it seeks permission for the locks it
requires. The second part is where the transaction acquires all the locks. As soon as the
transaction releases its first lock, the third phase starts. In this phase, the transaction
cannot demand any new locks; it only releases the acquired locks.

Two-phase locking has two phases, one is growing, where all the locks are being
acquired by the transaction; and the second phase is shrinking, where the locks held by
the transaction are being released.
In some implementations, a transaction may first acquire a shared (read) lock on an item and later upgrade it to an exclusive (write) lock; under 2PL, such upgrades must happen in the growing phase.
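A bare-bones sketch of the two-phase rule itself: a transaction may acquire locks only while it is still in its growing phase, and the first release moves it into the shrinking phase, after which any further lock request is refused. Lock modes, waiting, and deadlock handling are all omitted.

import java.util.HashSet;
import java.util.Set;

public class TwoPhaseLocking {
    static class Transaction {
        private final Set<String> held = new HashSet<>();
        private boolean shrinking = false;               // false = growing phase

        void lock(String item) {
            if (shrinking) {                             // 2PL: no new locks after the first release
                throw new IllegalStateException("cannot lock " + item + " in shrinking phase");
            }
            held.add(item);
        }

        void unlock(String item) {
            held.remove(item);
            shrinking = true;                            // first release starts the shrinking phase
        }
    }

    public static void main(String[] args) {
        Transaction t = new Transaction();
        t.lock("X");
        t.lock("Y");        // growing phase: fine
        t.unlock("X");      // shrinking phase begins
        try {
            t.lock("Z");    // violates two-phase locking
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}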

Strict Two-Phase Locking


The first phase of Strict-2PL is same as 2PL. After acquiring all the locks in the first
phase, the transaction continues to execute normally. But in contrast to 2PL, Strict-2PL
does not release a lock after using it. Strict-2PL holds all the locks until the commit point
and releases all the locks at a time.

Strict-2PL does not have cascading abort as 2PL does.

Timestamp-based Protocols
The most commonly used concurrency protocol is the timestamp based protocol. This
protocol uses either system time or logical counter as a timestamp.
Lock-based protocols manage the order between the conflicting pairs among
transactions at the time of execution, whereas timestamp-based protocols start working
as soon as a transaction is created.
Every transaction has a timestamp associated with it, and the ordering is determined by
the age of the transaction. A transaction created at 0002 clock time would be older than
all other transactions that come after it. For example, any transaction 'y' entering the
system at 0004 is two seconds younger and the priority would be given to the older one.
In addition, every data item is given the latest read and write-timestamp. This lets the
system know when the last ‘read and write’ operation was performed on the data item.

Timestamp Ordering Protocol


The timestamp-ordering protocol ensures serializability among transactions in their
conflicting read and write operations. This is the responsibility of the protocol system
that the conflicting pair of tasks should be executed according to the timestamp values
of the transactions.
• The timestamp of transaction Ti is denoted as TS(Ti).
• The read time-stamp of data-item X is denoted by R-timestamp(X).
• The write time-stamp of data-item X is denoted by W-timestamp(X).
Timestamp ordering protocol works as follows −
 If a transaction Ti issues a read(X) operation −
o If TS(Ti) < W-timestamp(X)
 Operation rejected.
o If TS(Ti) >= W-timestamp(X)
 Operation executed.
o All data-item timestamps updated.
 If a transaction Ti issues a write(X) operation −
o If TS(Ti) < R-timestamp(X)
 Operation rejected.
o If TS(Ti) < W-timestamp(X)
 Operation rejected and Ti rolled back.
o Otherwise, operation executed.
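The read and write rules above can be written down almost verbatim. The sketch below keeps one R-timestamp and one W-timestamp per data item and accepts or rejects operations accordingly; restarting a rolled-back transaction with a new timestamp is not shown.

import java.util.HashMap;
import java.util.Map;

public class TimestampOrdering {
    static class DataItem { long rTs = 0, wTs = 0; }     // R-timestamp(X) and W-timestamp(X)

    private final Map<String, DataItem> items = new HashMap<>();

    private DataItem item(String x) { return items.computeIfAbsent(x, k -> new DataItem()); }

    // Transaction Ti (timestamp ts) issues read(X).
    boolean read(long ts, String x) {
        DataItem d = item(x);
        if (ts < d.wTs) return false;                    // reject: X was already overwritten by a younger transaction
        d.rTs = Math.max(d.rTs, ts);                     // execute and update R-timestamp(X)
        return true;
    }

    // Transaction Ti (timestamp ts) issues write(X).
    boolean write(long ts, String x) {
        DataItem d = item(x);
        if (ts < d.rTs || ts < d.wTs) return false;      // reject; Ti would be rolled back
        d.wTs = ts;                                      // execute and update W-timestamp(X)
        return true;
    }

    public static void main(String[] args) {
        TimestampOrdering proto = new TimestampOrdering();
        System.out.println(proto.read(2, "X"));          // true: nothing written yet
        System.out.println(proto.write(1, "X"));         // false: an older write after a newer read
        System.out.println(proto.write(3, "X"));         // true
    }
}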
Thomas' Write Rule
This rule states that if TS(Ti) < W-timestamp(X), then the operation is rejected and Ti is rolled back.
Time-stamp ordering rules can be modified to make the schedule view serializable: instead of rolling Ti back, the 'write' operation itself is ignored.

Loss of Volatile Storage


A volatile storage like RAM stores all the active logs, disk buffers, and related data. In
addition, it stores all the transactions that are being currently executed. What happens if
such a volatile storage crashes abruptly? It would obviously take away all the logs and
active copies of the database. It makes recovery almost impossible, as everything that
is required to recover the data is lost.
Following techniques may be adopted in case of loss of volatile storage −
• We can have checkpoints at multiple stages so as to save the contents of the database periodically.
• A state of the active database in the volatile memory can be periodically dumped onto stable storage, which may also contain logs, active transactions, and buffer blocks.
• <dump> can be marked on a log file whenever the database contents are dumped from the volatile memory to stable storage.
Recovery
 When the system recovers from a failure, it can restore the latest dump.
 It can maintain a redo-list and an undo-list as checkpoints.
 It can recover the system by consulting undo-redo lists to restore the state of all
transactions up to the last checkpoint.
Database Backup & Recovery from Catastrophic Failure
A catastrophic failure is one where a stable, secondary storage device gets corrupt.
With the storage device, all the valuable data that is stored inside is lost. We have two different strategies to recover data from such a catastrophic failure −
• Remote backup − Here a backup copy of the database is stored at a remote location from where it can be restored in case of a catastrophe.
 Alternatively, database backups can be taken on magnetic tapes and stored at a
safer place. This backup can later be transferred onto a freshly installed database
to bring it to the point of backup.
Grown-up databases are too bulky to be frequently backed up. In such cases, we have
techniques where we can restore a database just by looking at its logs. So, all that we
need to do here is to take a backup of all the logs at frequent intervals of time. The
database can be backed up once a week, and the logs being very small can be backed
up every day or as frequently as possible.

Remote Backup
Remote backup provides a sense of security in case the primary location where the
database is located gets destroyed. Remote backup can be offline or real-time or online.
In case it is offline, it is maintained manually.

Crash Recovery
DBMS is a highly complex system with hundreds of transactions being executed every
second. The durability and robustness of a DBMS depends on its complex architecture
and its underlying hardware and system software. If it fails or crashes amid
transactions, it is expected that the system would follow some sort of algorithm or
techniques to recover lost data.

Failure Classification
To see where the problem has occurred, we generalize a failure into various categories,
as follows −

Transaction failure
A transaction has to abort when it fails to execute or when it reaches a point from where
it can’t go any further. This is called transaction failure where only a few transactions or
processes are hurt.
Reasons for a transaction failure could be −
 Logical errors − Where a transaction cannot complete because it has some
code error or any internal error condition.
 System errors − Where the database system itself terminates an active
transaction because the DBMS is not able to execute it, or it has to stop because
of some system condition. For example, in case of deadlock or resource
unavailability, the system aborts an active transaction.
System Crash
There are problems − external to the system − that may cause the system to stop
abruptly and cause the system to crash. For example, interruptions in power supply may
cause the failure of underlying hardware or software failure.
Examples may include operating system errors.
Disk Failure
In early days of technology evolution, it was a common problem where hard-disk drives
or storage drives used to fail frequently.
Disk failures include formation of bad sectors, unreachability to the disk, disk head
crash or any other failure, which destroys all or a part of disk storage.

Storage Structure
We have already described the storage system. In brief, the storage structure can be
divided into two categories −
 Volatile storage − As the name suggests, a volatile storage cannot survive
system crashes. Volatile storage devices are placed very close to the CPU;
normally they are embedded onto the chipset itself. For example, main memory
and cache memory are examples of volatile storage. They are fast but can store
only a small amount of information.
 Non-volatile storage − These memories are made to survive system crashes.
They are huge in data storage capacity, but slower in accessibility. Examples
may include hard-disks, magnetic tapes, flash memory, and non-volatile (battery
backed up) RAM.
Recovery and Atomicity
When a system crashes, it may have several transactions being executed and various
files opened for them to modify the data items. Transactions are made of various
operations, which are atomic in nature. But according to ACID properties of DBMS,
atomicity of transactions as a whole must be maintained, that is, either all the operations
are executed or none.
When a DBMS recovers from a crash, it should maintain the following −
 It should check the states of all the transactions, which were being executed.
 A transaction may be in the middle of some operation; the DBMS must ensure
the atomicity of the transaction in this case.
 It should check whether the transaction can be completed now or it needs to be
rolled back.
 No transaction should be allowed to leave the DBMS in an inconsistent state.
There are two types of techniques, which can help a DBMS in recovering as well as
maintaining the atomicity of a transaction −
 Maintaining the logs of each transaction, and writing them onto some stable
storage before actually modifying the database.
 Maintaining shadow paging, where the changes are done on a volatile memory,
and later, the actual database is updated.
Log-based Recovery
The log is a sequence of records that maintains a record of the actions performed by a
transaction. It is important that the logs are written prior to the actual modification and
stored on a stable storage medium, which is failsafe.
Log-based recovery works as follows −
 The log file is kept on a stable storage media.
 When a transaction enters the system and starts execution, it writes a log about
it.
<Tn, Start>
 When the transaction modifies an item X, it writes logs as follows −
<Tn, X, V1, V2>
It reads: Tn has changed the value of X from V1 to V2.
 When the transaction finishes, it logs −
<Tn, commit>
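As a minimal sketch of how such log records might be produced, consider the Python fragment below. The class name, record format, and the database variable are illustrative assumptions, not a real DBMS API; the point is simply that every record is flushed to stable storage before the corresponding item is modified.

import os

class TransactionLog:
    def __init__(self, log_path):
        self.log_path = log_path

    def _write(self, record):
        # Write-ahead rule: the log record reaches stable storage
        # before the database modification it describes.
        with open(self.log_path, "a") as f:
            f.write(record + "\n")
            f.flush()
            os.fsync(f.fileno())

    def start(self, tn):
        self._write(f"<{tn}, Start>")

    def update(self, tn, item, old_value, new_value):
        self._write(f"<{tn}, {item}, {old_value}, {new_value}>")

    def commit(self, tn):
        self._write(f"<{tn}, commit>")

# Illustrative usage: log first, then apply the change to the database item.
database = {"X": 100}
log = TransactionLog("txn.log")
log.start("T1")
log.update("T1", "X", 100, 150)   # X changes from V1 = 100 to V2 = 150
database["X"] = 150               # actual modification happens after logging
log.commit("T1")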
The database can be modified using two approaches −


 Deferred database modification − All logs are written on to the stable storage
and the database is updated when a transaction commits.
 Immediate database modification − Each log follows an actual database
modification. That is, the database is modified immediately after every operation.
Recovery with Concurrent Transactions
When more than one transaction is being executed in parallel, the logs are
interleaved. At the time of recovery, it would become hard for the recovery system to
backtrack all logs, and then start recovering. To ease this situation, most modern DBMS
use the concept of 'checkpoints'.

Checkpoint
Keeping and maintaining logs in real time and in a real environment may fill up all the
memory space available in the system. As time passes, the log file may grow too big to
be handled at all. Checkpoint is a mechanism where all the previous logs are removed
from the system and stored permanently in a storage disk. Checkpoint declares a point
before which the DBMS was in consistent state, and all the transactions were
committed.
Recovery
When a system with concurrent transactions crashes and recovers, it behaves in the
following manner −

 The recovery system reads the logs backwards from the end to the last
checkpoint.
 It maintains two lists, an undo-list and a redo-list.
 If the recovery system sees a log with <Tn, Start> and <Tn, Commit>, or just
<Tn, Commit>, it puts the transaction in the redo-list.
 If the recovery system sees a log with <Tn, Start> but no commit or abort log
found, it puts the transaction in the undo-list.


All the transactions in the undo-list are then undone and their logs are removed. For all
the transactions in the redo-list, their previous logs are removed, and the transactions
are then redone, after which their logs are saved again.
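A minimal sketch of this recovery pass is shown below (Python, assuming the simple textual record format used earlier in these notes; it illustrates the rule rather than the implementation of any particular DBMS). The log is scanned backwards until the last checkpoint, and the redo-list and undo-list are built from the records seen.

def build_recovery_lists(log_records):
    # Scan the log backwards from the end to the last <checkpoint> record.
    committed, aborted, started = set(), set(), set()
    for record in reversed(log_records):
        if record == "<checkpoint>":
            break
        txn = record.split(",")[0].strip("<")
        if record.endswith("commit>"):
            committed.add(txn)
        elif record.endswith("abort>"):
            aborted.add(txn)
        elif record.endswith("Start>"):
            started.add(txn)
    # Commit seen -> redo; Start seen with no commit/abort -> undo.
    redo_list = sorted(committed)
    undo_list = sorted(t for t in started if t not in committed and t not in aborted)
    return redo_list, undo_list

# Illustrative usage:
logs = ["<checkpoint>", "<T1, Start>", "<T1, X, 100, 150>", "<T1, commit>",
        "<T2, Start>", "<T2, Y, 7, 9>"]
print(build_recovery_lists(logs))   # (['T1'], ['T2'])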
Online backup systems are more real-time and are lifesavers for database administrators
and investors. An online backup system is a mechanism where every bit of the real-time
data is backed up simultaneously at two distant places. One of them is directly
connected to the system and the other one is kept at a remote place as backup.
As soon as the primary database storage fails, the backup system senses the failure
and switches the user system to the remote storage. Sometimes this is so instant that
the users can’t even realize a failure.
DBMS APPLICATIONS
There are different fields where a database management system is utilized.
Following are a few applications which make use of a database management
system (DBMS) –

1. Railway Reservation System –
In the railway reservation system, the database is needed to store the records of
ticket bookings and the status of a train’s arrival and departure. Also, if a train is
running late, people get to know about it through a database update.

2. Library Management System –
There are lots of books in a library, so it is difficult to keep the records of all the
books in a register or copy. Therefore, the database management system (DBMS)
is used to maintain all the information related to the name of the book, issue date,
availability of the book, and its author.

3. Banking –
The database management system is used to store the transaction information of
the customers in the database.

4. Education Sector –
Nowadays, examinations are conducted online by many schools and colleges.
They manage all the examination data through a database management system
(DBMS). Apart from that, students’ registration details, grades, courses, fees,
attendance, results, and so on are all stored in the database.

5. Credit Card Transactions –
The database management system is used for purchases made on credit cards
and for the generation of monthly statements.

6. Social Media Sites –
We all use social media sites to connect with friends and to share our views with
the world. Every day, many people sign up for social media accounts such as
Pinterest, Facebook, Twitter, and Google Plus. Through the use of the database
management system, all the information of the users is stored in the database,
and we become able to connect with other people.

7. Telecommunications –
No telecommunication company can operate without a DBMS. The database
management system is essential for these companies to store call details and
monthly postpaid bills in the database.

8. Finance –
The database management system is used for storing information about sales,
and the holding and purchase of financial instruments, for example, stocks and
bonds, in a database.

9. Online Shopping –
These days, online shopping has become a major trend. Nobody wants to visit a
shop and waste their time. Everybody wants to shop through online shopping
websites (for example, Amazon, Flipkart, Snapdeal) from home. So all the products
are sold and added only with the help of the database management system
(DBMS). Invoices, payments, and purchase information are all handled with the
help of the DBMS.

10. Human Resource Management –
Big firms or organizations have many workers or employees working under them.
They store information about employees’ salaries, taxes, and work with the help of
a database management system (DBMS).

11. Manufacturing –
Manufacturing organizations make various kinds of products and sell them
regularly. To keep information about their products, such as bills, purchase of
items, quantity, and supply chain management, the database management system
(DBMS) is used.

12. Airline Reservation System –
This system is similar to the railway reservation system. It also uses a database
management system to store the records of flight departures, arrivals, and delay
status.