
Key For II Mid DBMS B


B

ADITYA COLLEGE OF ENGINEERING


PUNGANUR ROAD, MADANAPALLE-517325
II-B.Tech(R15)-I-Semester II-Internal Examinations November -2016 (Descriptive)
15A05301- Database Management Systems (Computer Science Engineering)
Time : 90 min
Max Marks : 30
__________________________________________________________________________________

PART A
1 Answer all the questions each question carries 2 Marks

5*2=10

a) Define Transaction and describe ACID properties of a transaction.


Collections of operations that form a single logical unit of work are called transactions. A database system
must ensure proper execution of transactions despite failures: either the entire transaction executes, or none of
it does. A transaction is a unit of program execution that accesses and possibly updates various data items. To
ensure integrity of the data, we require that the database system maintain the following properties of the
transaction.
Atomicity: Either all operations of the transaction are reflected properly in the database, or none are.
Consistency: Execution of a transaction in isolation (that is, with no other transaction executing concurrently)
preserves the consistency of the database.
Isolation: Even though multiple transactions may execute concurrently, the system guarantees that, for every
pair of transactions Ti and Tj, it appears to Ti that either Tj finished execution before Ti started, or Tj started
execution after Ti finished. Thus, each transaction is unaware of other transactions executing concurrently in
the system.
Durability : After a transaction completes successfully, the changes it has made to the database persist, even if
there are system failures.
These properties are often called the ACID properties, the acronym is derived from the first letter of each of the
four properties.
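
As an illustration only, a minimal Python sketch of a money transfer treated as one transaction, using sqlite3; the account table and column names are assumptions made for the example, not part of the syllabus text.

```python
# Illustrative sketch: a transfer as a single logical unit of work.
# The `account` table (name, balance) is a hypothetical example schema.
import sqlite3

def transfer(db: sqlite3.Connection, src: str, dst: str, amount: float) -> None:
    """Debit src and credit dst as one transaction."""
    try:
        cur = db.cursor()
        cur.execute("UPDATE account SET balance = balance - ? WHERE name = ?", (amount, src))
        cur.execute("UPDATE account SET balance = balance + ? WHERE name = ?", (amount, dst))
        db.commit()        # durability: once commit succeeds, the changes persist
    except Exception:
        db.rollback()      # atomicity: on failure, none of the updates survive
        raise
```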

b) Define BCNF and super key.


Super Key
The set of all attributes that can be determined using the given set of attributes is called attribute closure and is
denoted by A+. If A+ contains all the attributes of a relation then A is called the super key.
Ex. R(ABC) with FDs A → B and B → C; then A+ = {A, B, C} and hence A is a super key.
BCNF
Let R be a relation schema, X be a subset of the attributes of R, and let A be an attribute of R. R is in Boyce-Codd normal form if for every FD X → A that holds over R, one of the following statements is true:
A belongs to X; that is, it is a trivial FD, or
X is a super key.
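
As an illustration, a minimal Python sketch of computing an attribute closure and using it to test for super keys and BCNF; encoding attributes as frozensets of single letters is an assumption made for the example.

```python
# Attribute closure X+ under a set of FDs, and a BCNF check built on it.
def closure(attrs: frozenset, fds: list[tuple[frozenset, frozenset]]) -> frozenset:
    """Return X+ for the given (lhs -> rhs) functional dependencies."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

# Example R(ABC) with A -> B and B -> C, as in the answer above.
R = frozenset("ABC")
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
assert closure(frozenset("A"), fds) == R          # A+ = {A, B, C}, so A is a super key
is_bcnf = all(rhs <= lhs or closure(lhs, fds) == R for lhs, rhs in fds)
print(is_bcnf)   # False: B -> C violates BCNF because B is not a super key
```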

c) How to recover from non-volatile storage data loss?


The basic scheme is to dump the entire content of the database to stable storage periodically, say, once
per day. For example, we may dump the database to one or more magnetic tapes. If a failure occurs that results
in the loss of physical database blocks, the system uses the most recent dump in restoring the database to a
previous consistent state. Once this restoration has been accomplished, the system uses the log to bring the
database system to the most recent consistent state. More precisely, no transaction may be active during the
dump procedure, and a procedure similar to checkpointing must take place:
1. Output all log records currently residing in main memory onto stable storage.
2. Output all buffer blocks onto the disk.
3. Copy the contents of the database to stable storage.
4. Output a log record <dump> onto the stable storage.
Steps 1, 2, and 4 correspond to the three steps used for checkpoints. To recover from the loss of nonvolatile
storage, the system restores the database to disk by using the most recent dump. Then, it consults the log and
redoes all the transactions that have committed since the most recent dump occurred. Notice that no undo
operations need to be executed. A dump of the database contents is also referred to as an archival dump, since
we can archive the dumps and use them later to examine old states of the database. Dumps of a database and
checkpointing of buffers are similar. The simple dump procedure described here is costly for the following two
reasons. First, the entire database must be copied to stable storage, resulting in considerable data transfer.
Second, since transaction processing is halted during the dump procedure, CPU cycles are wasted. Fuzzy dump
schemes have been developed, which allow transactions to be active while the dump is in progress.
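
A hedged sketch of the dump and media-recovery steps described above; the log_buffer, buffer_pool, database, and stable_storage objects and their methods are hypothetical stand-ins, not a real API.

```python
# Sketch of the archival-dump procedure and recovery from loss of non-volatile storage.
def archival_dump(log_buffer, buffer_pool, database, stable_storage):
    """Steps 1-4: flush the log, flush buffers, copy the database, record <dump>."""
    stable_storage.write_log(log_buffer.flush())       # 1. log records -> stable storage
    buffer_pool.flush_all()                            # 2. buffer blocks -> disk
    stable_storage.copy_database(database)             # 3. database contents -> stable storage
    stable_storage.write_log("<dump>")                 # 4. <dump> log record

def recover_from_media_failure(stable_storage, database):
    """Restore the most recent dump, then redo committed transactions (no undo is needed)."""
    database.restore(stable_storage.latest_dump())
    for record in stable_storage.log_since_last_dump():
        if record.transaction_committed:
            record.redo(database)
```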

d) Describe Lost update problem and Unrepeatable read Problem.
Lost Update Problem (W-W Conflict)
Suppose that the operations of T1 and T2 are interleaved in such a way that T2 reads the value of a data item
before T1 updates it. When T2 then updates that data item in the database, the value written by T1 is
overwritten by the value written by T2 and hence is lost. This is
known as the lost update problem.

Unrepeatable read problem (R-W conflict)


When a transaction tries to read the value of a data item twice and another transaction updates the same data
item between the two read operations, the first transaction reads different values of the
same data item during its execution. This is known as the unrepeatable read problem.
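
The two anomalies can be shown with a small Python simulation on an in-memory dictionary; the item X and its values are made up for the example.

```python
# A toy "database" with one item X, used to replay the two anomalous schedules.
db = {"X": 100}

# Lost update (W-W conflict): T2 reads X before T1's update, then overwrites it.
t1_x = db["X"]            # T1: read(X)  -> 100
t2_x = db["X"]            # T2: read(X)  -> 100 (reads the not-yet-updated value)
db["X"] = t1_x + 50       # T1: write(X) -> 150
db["X"] = t2_x - 30       # T2: write(X) -> 70, so T1's +50 update is lost
print(db["X"])            # 70, not the expected 120

# Unrepeatable read (R-W conflict): T1 reads X twice around T2's update.
db["X"] = 100
first = db["X"]           # T1: read(X)  -> 100
db["X"] = 200             # T2: write(X)
second = db["X"]          # T1: read(X)  -> 200, differs from the first read
print(first, second)      # 100 200
```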

e) Define what is hashing and explain various types of hashing.


Hashing is an effective technique to calculate the direct location of a data record on the disk without using
an index structure. Hashing uses hash functions with search keys as parameters to generate the address of a data
record.
Hash Organization
Bucket A hash file stores data in bucket format. Bucket is considered a unit of storage. A bucket typically
stores one complete disk block, which in turn can store one or more records.
Hash Function A hash function, h, is a mapping function that maps all the set of search-keys K to the address
where actual records are placed. It is a function from search keys to bucket addresses.
Static Hashing
In static hashing, when a search-key value is provided, the hash function always computes the same
address. For example, if a mod-4 hash function is used, it generates only four bucket addresses (0 to 3). The
output address is always the same for a given key, and the number of buckets remains unchanged at all times.
Operations in static Hashing:

Insertion When a record is required to be entered using static hash, the hash function h computes the
bucket address for search key K, where the record will be stored. Bucket address = h(K)
Search When a record needs to be retrieved, the same hash function can be used to retrieve the address
of the bucket where the data is stored.
Delete This is simply a search followed by a deletion operation.
Bucket Overflow
The condition of bucket-overflow is known as collision. This is a fatal state for any static hash function.
In this case, overflow chaining can be used.
Overflow Chaining When buckets are full, a new bucket is allocated for the same hash result and is linked
after the previous one. This mechanism is called Closed Hashing.
Linear Probing When a hash function generates an address at which data is already stored, the next free
bucket is allocated to it. This mechanism is called Open Hashing.
Dynamic Hashing
The problem with static hashing is that it does not expand or shrink dynamically as the size of the database
grows or shrinks. Dynamic hashing provides a mechanism in which data buckets are added and removed
dynamically and on-demand. Dynamic hashing is also known as extended hashing. Hash function, in dynamic
hashing, is made to produce a large number of values and only a few are used initially.
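
A minimal Python sketch of static hashing with overflow chaining; representing each bucket and its overflow chain as a single list is a simplification made for illustration.

```python
# Static hashing: a fixed number of buckets and a mod hash function.
NUM_BUCKETS = 4                      # static: never grows or shrinks
buckets = [[] for _ in range(NUM_BUCKETS)]

def h(key: int) -> int:
    """mod-4 hash function: generates only four bucket addresses, 0..3."""
    return key % NUM_BUCKETS

def insert(key: int, record: str) -> None:
    buckets[h(key)].append((key, record))   # on overflow, entries simply chain in the bucket

def search(key: int) -> list:
    return [rec for k, rec in buckets[h(key)] if k == key]

def delete(key: int) -> None:
    buckets[h(key)] = [(k, r) for k, r in buckets[h(key)] if k != key]

insert(10, "rec-10"); insert(14, "rec-14")   # 10 and 14 collide in bucket 2
print(search(14))                            # ['rec-14']
```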

PART B
2) a) Describe about testing of Serializability.
There is a simple and efficient method for determining conflict serializability of a schedule. Consider a
schedule S. We construct a directed graph, called a precedence graph, from S. This graph consists of a
pair G = (V, E), where V is a set of vertices and E is a set of edges. The set of vertices consists of all
the transactions participating in the schedule. The set of edges consists of all edges Ti → Tj for which
one of three conditions holds:
1. Ti executes write(Q) before Tj executes read(Q).
2. Ti executes read(Q) before Tj executes write(Q).
3. Ti executes write(Q) before Tj executes write(Q).
If an edge Ti → Tj exists in the precedence graph, then, in any serial schedule S′ equivalent to S, Ti
must appear before Tj .
If the precedence graph for S has a cycle, then schedule S is not conflict serializable. If the graph
contains no cycles, then the schedule S is conflict serializable. A serializability order of the
transactions can be obtained through topological sorting, which determines a linear order consistent
with the partial order of the precedence graph. There are, in general, several possible linear orders that
can be obtained through a topological sorting. Thus, to test for conflict serializability, we need to
construct the precedence graph
and to invoke a cycle-detection algorithm. Cycle-detection algorithms, such as those based on depth-first search, require on the order of n² operations, where n is the number of vertices in the graph (that
is, the number of transactions).
Testing for view serializability is complicated. The problem of testing for view serializability is itself
NP-complete. Thus, almost certainly there exists no efficient algorithm to test for view serializability.
However, concurrency-control schemes can still use sufficient conditions for view serializability. That
is, if the sufficient conditions are satisfied, the schedule is view serializable, but there may be view-serializable schedules that do not satisfy the sufficient conditions.
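
A minimal sketch of building a precedence graph and detecting a cycle; the (transaction, action, item) schedule format is an assumption made for the example.

```python
# Build Ti -> Tj edges for conflicting operations and test for a cycle with DFS.
from collections import defaultdict

def precedence_graph(schedule):
    edges = defaultdict(set)
    for i, (ti, act_i, q_i) in enumerate(schedule):
        for tj, act_j, q_j in schedule[i + 1:]:
            if ti != tj and q_i == q_j and "write" in (act_i, act_j):
                edges[ti].add(tj)           # Ti -> Tj for each conflicting pair
    return edges

def has_cycle(edges):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u):
        color[u] = GRAY
        for v in edges[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in list(edges))

# S: T1 reads Q, T2 writes Q, then T2 reads R, T1 writes R -> a cycle, so not conflict serializable.
S = [("T1", "read", "Q"), ("T2", "write", "Q"), ("T2", "read", "R"), ("T1", "write", "R")]
print(has_cycle(precedence_graph(S)))   # True
```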

b) Write a short note on Checkpoint.


Keeping and maintaining logs in real time and in a real environment may fill up all the memory space
available in the system. As time passes, the log file may grow too big to be handled at all. Checkpoint
is a mechanism where all the previous logs are removed from the system and stored permanently in a
storage disk. Checkpoint declares a point before which the DBMS was in consistent state, and all the
transactions were committed.
When a system failure occurs, we must consult the log to determine those transactions that need to be
redone and those that need to be undone. Rather than reprocessing the entire log, which is time-consuming and largely unnecessary, we can use checkpoints:

Output onto stable storage all the log records currently residing in main memory.

Output to the disk all modified buffer blocks.

Output onto stable storage a log record, <checkpoint>.

After a crash, recovery then needs to process only the log records written since the last checkpoint.
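
A hedged sketch of these checkpoint steps; the log_buffer, buffer_pool, and stable_log objects and their methods are hypothetical stand-ins used only for illustration.

```python
# Checkpointing: flush the in-memory log, flush modified buffers, write <checkpoint>.
def checkpoint(log_buffer, buffer_pool, stable_log):
    stable_log.append(log_buffer.flush())    # 1. log records in memory -> stable storage
    buffer_pool.flush_modified_blocks()      # 2. modified buffer blocks -> disk
    stable_log.append("<checkpoint>")        # 3. the <checkpoint> log record

def records_to_process(stable_log):
    """After a crash, only log records after the last <checkpoint> need processing."""
    records = stable_log.read_all()
    last_ckpt = max((i for i, r in enumerate(records) if r == "<checkpoint>"), default=-1)
    return records[last_ckpt + 1:]
```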
(OR)

3) Write short note on


a) Recoverable Schedules.
Recoverable Schedule:
A schedule in which, for each pair of transactions Ti and Tj such that Tj reads a data item that was previously
written by Ti, the commit operation of Ti appears before the commit operation of Tj.

b) Graph based Protocol.


To implement a graph-based protocol we need additional information on how each transaction will
access the database. The simplest model requires that we have prior knowledge about the order in which the
database items will be accessed. To acquire such prior knowledge we impose a partial ordering on the set of all data
items; that is, if di → dj is in the ordering, any transaction that requires dj must access di before dj.
The partial ordering is represented as a directed acyclic graph called the database graph. The simplest
graph-based protocol is the Tree Protocol, which uses only exclusive lock modes.

In the tree protocol, the only lock instruction allowed is lock-X. Each transaction
Ti can lock a data item at most once, and must observe the following rules:
1. The first lock by Ti may be on any data item.
2. Subsequently, a data item Q can be locked by Ti only if the parent of Q is
currently locked by Ti.
3. Data items may be unlocked at any time.
4. A data item that has been locked and unlocked by Ti cannot subsequently be
relocked by Ti.
All schedules that are legal under the tree protocol are conflict serializable.

The tree-locking protocol has an advantage over the two-phase locking protocol in that, unlike two-phase locking, it is deadlock-free, so no rollbacks are required. The tree-locking protocol has another
advantage over the two-phase locking protocol in that unlocking may occur earlier. Earlier unlocking
may lead to shorter waiting times, and to an increase in concurrency. However, the protocol has the
disadvantage that, in some cases, a transaction may have to lock data items that it does not access.
For a set of transactions, there may be conflict-serializable schedules that cannot be obtained through
the tree protocol. Indeed, there are schedules possible under the two-phase locking protocol that are
not possible under the tree protocol, and vice versa.
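
A minimal sketch of enforcing the four rules above for one transaction; representing the database graph as a child-to-parent map is an assumption made for the example.

```python
# Tree protocol: only lock-X, parent must be held for later locks, no relocking.
class TreeProtocolViolation(Exception):
    pass

def lock_X(txn_state: dict, parent: dict, item: str) -> None:
    """Grant lock-X on `item` if the tree-protocol rules allow it."""
    held, ever_locked = txn_state["held"], txn_state["ever_locked"]
    if item in ever_locked:
        raise TreeProtocolViolation("rule 4: an item cannot be relocked after unlocking")
    if ever_locked and parent.get(item) not in held:
        raise TreeProtocolViolation("rule 2: the parent must currently be locked")
    held.add(item)                           # rule 1: the first lock may be on any item
    ever_locked.add(item)

def unlock(txn_state: dict, item: str) -> None:
    txn_state["held"].discard(item)          # rule 3: items may be unlocked at any time

parent = {"B": "A", "C": "A", "D": "B"}      # database graph: A is the root
T = {"held": set(), "ever_locked": set()}
lock_X(T, parent, "A"); lock_X(T, parent, "B"); unlock(T, "A"); lock_X(T, parent, "D")
```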

c) ARIES.
ARIES is a recovery algorithm that is designed to work with a steal, no-force approach. When the
recovery manager is invoked after a crash, restart proceeds in three phases:
1. Analysis: Identifies dirty pages in the buffer pool and active transactions at the time of the crash.
2. Redo: Repeats all actions, starting from an appropriate point in the log, and restores the database state to what
it was at the time of the crash.
3. Undo: Undoes the actions of transactions that did not commit, so that the database reflects only the actions of
committed transactions.
There are three main principles behind the ARIES recovery algorithm:
Write-ahead logging: Any change to a database object is first recorded in the log; the record in the log must be
written to stable storage before the change to the database object is written to disk.
Repeating history during Redo: Upon restart following a crash, ARIES retraces all actions of the DBMS
before the crash and brings the system back to the exact state that it was in at the time of the crash. Then, it
undoes the actions of transactions that were still active at the time of the crash.
Logging changes during Undo: Changes made to the database while undoing a transaction are logged in order
to ensure that such an action is not repeated in the event of repeated restarts.
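
A simplified skeleton of the three restart phases; the log-record tuple format and the db.redo/db.undo calls are illustrative assumptions, and full ARIES additionally uses LSNs, the dirty page table, and compensation log records.

```python
# Sketch of ARIES restart: analysis, redo (repeat history), undo (roll back losers).
def aries_restart(log, db):
    # 1. Analysis: find transactions active at the crash and pages that may be dirty.
    active, dirty = set(), set()
    for lsn, txn, rec_type, page in log:
        if rec_type == "update":
            active.add(txn); dirty.add(page)
        elif rec_type in ("commit", "abort"):
            active.discard(txn)
    # 2. Redo: repeat history for every logged update, committed or not.
    for lsn, txn, rec_type, page in log:
        if rec_type == "update":
            db.redo(lsn, page)
    # 3. Undo: roll back, in reverse log order, the updates of transactions that never committed.
    for lsn, txn, rec_type, page in reversed(log):
        if rec_type == "update" and txn in active:
            db.undo(lsn, page)           # full ARIES would also write a CLR here
```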

d) Logical Undo Logging


For operations where locks are released early, we cannot perform the undo actions by simply writing
back the old value of the data items. Consider a transaction T that inserts an entry into a B+-tree, and,
following the B+-tree concurrency-control protocol, releases some locks after the insertion operation
completes, but before the transaction commits. After the locks are released, other transactions may
perform further insertions or deletions, thereby causing further changes to the B+-tree nodes.
Even though the operation releases some locks early, it must retain enough locks to ensure that no
other transaction is allowed to execute any conflicting operation (such as reading the inserted value or
deleting the inserted value); for instance, it holds locks on the affected leaf level of the B+-tree until the end of the
transaction.
The insertion operation has to be undone by a logical undo, that is, in this case, by the execution of a
delete operation. Therefore, when the insertion operation completes, before it releases any locks, it
writes a log record <Ti, Oj, operation-end, U>, where U denotes the undo information and Oj denotes
a unique identifier for (the instance of) the operation. The insertion and deletion operations are
examples of a class of operations that require logical undo operations since they release locks early;
we call such operations logical operations. Before a logical operation begins, it writes a log record
<Ti,Oj , operation-begin>, where Oj is the unique identifier for the operation. While the system is
executing the operation, it does physical logging in the normal fashion for all updates performed by
the operation. Thus, the usual old-value and new-value information is written out for each update.
When the operation finishes, it writes an operation-end log record.
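
A hedged sketch of the operation-begin / operation-end records written around a B+-tree insert; the log list and btree object are hypothetical, and the record strings only mirror the formats quoted in the text.

```python
# Logical undo logging around one B+-tree insert operation.
def logged_insert(log: list, btree, Ti: str, Oj: str, key, rid) -> None:
    log.append(f"<{Ti}, {Oj}, operation-begin>")
    old, new = None, (key, rid)
    log.append(f"<{Ti}, physical update, old={old}, new={new}>")   # normal physical logging
    btree.insert(key, rid)
    # U is the logical undo information: deleting the key undoes the insert.
    log.append(f"<{Ti}, {Oj}, operation-end, U=(delete {key})>")
```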

4) a) Explain concurrency control techniques: 2PL and Timestamp-based protocols.
A DBMS must be able to ensure that only serializable, recoverable schedules are allowed, and that no
actions of committed transactions are lost while undoing aborted transactions. A DBMS typically uses
a locking protocol to achieve this. A locking protocol is a set of rules to be followed by each
transaction, in order to ensure that even though actions of several transactions might be interleaved,
the net effect is identical to executing all transactions in some serial order.

B
Strict Two-Phase Locking (Strict 2PL):
The most widely used locking protocol, called Strict Two-Phase Locking, or Strict 2PL, has two rules:
1. If a transaction T wants to read (respectively, modify) an object, it first requests a shared (respectively,
exclusive) lock on the object. A transaction that requests a lock is suspended until the DBMS is able to grant it
the requested lock. The DBMS keeps track of the locks it has granted and ensures that if a transaction holds an
exclusive lock on an object, no other transaction holds a shared or exclusive lock on the same object.
2. All locks held by a transaction are released when the transaction is completed.
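
A minimal sketch of these two rules; the lock-table layout and the return-value convention (False meaning "wait") are assumptions made for illustration.

```python
# Strict 2PL sketch: shared/exclusive compatibility, and release only at completion.
lock_table = {}   # object -> {"mode": "S" or "X", "holders": set of transaction ids}

def request(txn: str, obj: str, mode: str) -> bool:
    """Return True if the lock is granted; False means the requester must wait."""
    entry = lock_table.get(obj)
    if entry is None:
        lock_table[obj] = {"mode": mode, "holders": {txn}}
        return True
    if mode == "S" and entry["mode"] == "S":        # shared locks are compatible
        entry["holders"].add(txn)
        return True
    return txn in entry["holders"] and entry["mode"] == mode   # otherwise: suspend

def complete(txn: str) -> None:
    """Rule 2: all locks are released only when the transaction completes."""
    for obj in list(lock_table):
        lock_table[obj]["holders"].discard(txn)
        if not lock_table[obj]["holders"]:
            del lock_table[obj]
```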
Multiple-Granularity Locking
Another specialized locking strategy is called multiple-granularity locking, and it allows us to
efficiently set locks on objects that contain other objects. For instance, a database contains several files,
a file is a collection of pages, and a page is a collection of records. A transaction that expects to
access most of the pages in a file should probably set a lock on the entire file, rather than locking
individual pages as and when it needs them. Doing so reduces the locking overhead considerably. On
the other hand, other transactions that require access to parts of the file (even parts that are not
needed by this transaction) are blocked. If a transaction accesses relatively few pages of the file, it is
better to lock only those pages. Similarly, if a transaction accesses most of the records on a page, it should
lock the entire page, and if it accesses just a few records, it should lock just those records.
The recovery manager of a DBMS is responsible for ensuring two important properties of transactions:
atomicity and durability. It ensures atomicity by undoing the actions of transactions that do not commit
and durability by making sure that all actions of committed transactions survive system crashes, (e.g.,
a core dump caused by a bus error) and media failures (e.g., a disk is corrupted).
Timestamp Ordering Protocol
The timestamp-ordering protocol ensures serializability among transactions in their conflicting read and write
operations. This is the responsibility of the protocol system that the conflicting pair of tasks should be executed
according to the timestamp values of the transactions.

The timestamp of transaction Ti is denoted as TS(Ti).

Read time-stamp of data-item X is denoted by R-timestamp(X).

Write time-stamp of data-item X is denoted by W-timestamp(X).


Timestamp ordering protocol works as follows:
If a transaction Ti issues a read(X) operation:
o If TS(Ti) < W-timestamp(X): the operation is rejected.
o If TS(Ti) >= W-timestamp(X): the operation is executed and all data-item timestamps are updated.
If a transaction Ti issues a write(X) operation:
o If TS(Ti) < R-timestamp(X): the operation is rejected.
o If TS(Ti) < W-timestamp(X): the operation is rejected and Ti is rolled back.
o Otherwise, the operation is executed.
Thomas' Write Rule
The basic rule above states that if TS(Ti) < W-timestamp(X), then the write operation is rejected and Ti is rolled back.
Thomas' write rule modifies the timestamp-ordering rules to make more schedules view serializable:
instead of rolling Ti back, the obsolete 'write' operation itself is simply ignored.
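
A minimal sketch of these checks, with Thomas' write rule available as an optional flag; the dictionaries holding R-timestamp(X) and W-timestamp(X) and the returned strings are illustrative assumptions.

```python
# Timestamp-ordering checks for read(X) and write(X).
R_ts, W_ts = {}, {}    # read and write timestamps per data item

def read(ts_Ti: int, X: str) -> str:
    if ts_Ti < W_ts.get(X, 0):
        return "reject: roll back Ti"          # Ti would read an already-overwritten value
    R_ts[X] = max(R_ts.get(X, 0), ts_Ti)
    return "execute read(X)"

def write(ts_Ti: int, X: str, thomas_rule: bool = False) -> str:
    if ts_Ti < R_ts.get(X, 0):
        return "reject: roll back Ti"          # a younger transaction has already read X
    if ts_Ti < W_ts.get(X, 0):
        # Thomas' write rule: the obsolete write is simply ignored instead of rolling back.
        return "ignore write" if thomas_rule else "reject: roll back Ti"
    W_ts[X] = ts_Ti
    return "execute write(X)"
```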

b) Explain the process of remote backups.


Remote backup systems provide high availability by allowing transaction processing to continue even if the
primary site is destroyed. Remote backup can be offline or online (real-time). If it is offline, it is
maintained manually. Online backup systems operate in real time and are lifesavers for database administrators and
investors. An online backup system is a mechanism where every bit of the real-time data is backed up
simultaneously at two distant places. One of them is directly connected to the system and the other one is kept
at a remote place as backup.
As soon as the primary database storage fails, the backup system senses the failure and switches the user system
to the remote storage. Sometimes this is so instantaneous that the users cannot even notice the failure.

Detection of failure: The backup site must detect when the primary site has failed.
o To distinguish primary-site failure from link failure, maintain several communication
links between the primary and the remote backup.
o Heart-beat messages are exchanged over these links.
Transfer of control:
o To take over control, the backup site first performs recovery using its copy of the database and all the
log records it has received from the primary.
o Thus, completed transactions are redone and incomplete transactions are rolled back.
o When the backup site takes over processing, it becomes the new primary.
o To transfer control back to the old primary when it recovers, the old primary must receive redo logs
from the old backup and apply all updates locally.
Time to recover: To reduce delay in takeover, the backup site periodically processes the redo log records (in
effect, performing recovery from the previous database state), performs a checkpoint, and can then delete
earlier parts of the log.
Hot-Spare configuration permits very fast takeover:
o The backup continually processes redo log records as they arrive, applying the updates locally.
o When failure of the primary is detected, the backup rolls back incomplete transactions and is
ready to process new transactions.
Alternative to remote backup: distributed database with replicated data
o Remote backup is faster and cheaper, but less tolerant to failure
Ensure durability of updates by delaying transaction commit until the update is logged at the backup; this
delay can be avoided by permitting lower degrees of durability.
One-safe: commit as soon as the transaction's commit log record is written at the primary.
o Problem: updates may not arrive at the backup before it takes over.
Two-very-safe: commit when the transaction's commit log record is written at both the primary and the backup.
o Reduces availability, since transactions cannot commit if either site fails.
Two-safe: proceed as in two-very-safe if both primary and backup are active. If only the primary is
active, the transaction commits as soon as its commit log record is written at the primary.
o Better availability than two-very-safe; avoids the problem of lost transactions in one-safe.
(OR)

5) a) What is an Index? Elaborate about ISAM and B+-Tree Structures.

Indexing is a data structure technique to efficiently retrieve records from the database files based on some
attributes on which the indexing has been done. Indexing in database systems is similar to what we see in
books.
INDEXED SEQUENTIAL ACCESS METHOD (ISAM)
The potentially large size of the index file motivates the ISAM idea: build an auxiliary index file on the index file,
and so on recursively, until the final auxiliary file fits on one page. This repeated construction of a one-level
index leads to a tree structure that is illustrated in the figure. The data entries of the ISAM index are in the leaf
pages of the tree and additional overflow pages that are chained to some leaf page. In addition, some systems
carefully organize the layout of pages so that page boundaries correspond closely to the physical characteristics
of the underlying storage device. The ISAM structure is completely static and facilitates such low-level
optimizations.

Fig ISAM Index Structure


Each tree node is a disk page, and all the data resides in the leaf pages. This corresponds to an index that uses
Alternative (1) for data entries; we can create an index with Alternative (2) by storing the data records in a
separate file and storing ⟨key, rid⟩ pairs in the leaf pages of the ISAM index. When the file is created, all leaf
pages are allocated sequentially and sorted on the search key value. The non-leaf level pages are then allocated.
If there are several inserts to the file subsequently, so that more entries are inserted into a leaf than will fit onto a
single page, additional pages are needed because the index structure is static. These additional pages are
allocated from an overflow area. The allocation of pages is illustrated in below Figure.

Fig: Page allocation in ISAM

B+ tree
A static structure such as the ISAM index suffers from the problem that long overflow chains can develop as
the file grows, leading to poor performance. This problem motivated the development of more flexible, dynamic
structures that adjust gracefully to inserts and deletes. The B+ tree search structure, which is widely used, is a
balanced tree in which the internal nodes direct the search and the leaf nodes contain the data entries. Since the
tree structure grows and shrinks dynamically, it is not feasible to allocate the leaf pages sequentially as in
ISAM, where the set of primary leaf pages was static. In order to retrieve all leaf pages efficiently, we have to
link them using page pointers. By organizing them into a doubly linked list, we can easily traverse the sequence
of leaf pages in either direction. This structure is illustrated in Figure.

The following are some of the main characteristics of a B+ tree:


Operations (insert, delete) on the tree keep it balanced. However, deletion is often implemented by simply
locating the data entry and removing it, without adjusting the tree as needed to guarantee the 50 percent
occupancy, because files typically grow rather than shrink. Searching for a record requires just a traversal from
the root to the appropriate leaf. We will refer to the length of a path from the root to a leaf (any leaf, because
the tree is balanced) as the height of the tree.
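
A toy Python sketch of a root-to-leaf search in a B+-tree-like structure; the Node layout (keys, children, leaf entries) is an assumption for illustration, not a full implementation with splits and merges.

```python
# Internal nodes only direct the search; data entries live in the leaves.
from bisect import bisect_right

class Node:
    def __init__(self, keys, children=None, entries=None):
        self.keys, self.children, self.entries = keys, children, entries  # leaf iff children is None

def search(node: Node, key):
    while node.children is not None:                 # walk from the root down to one leaf
        node = node.children[bisect_right(node.keys, key)]
    return dict(node.entries).get(key)               # the data entry (e.g., a rid) or None

leaf1 = Node(keys=[5, 10], entries=[(5, "rid-5"), (10, "rid-10")])
leaf2 = Node(keys=[20, 30], entries=[(20, "rid-20"), (30, "rid-30")])
root = Node(keys=[20], children=[leaf1, leaf2])
print(search(root, 30))     # 'rid-30', found after a single root-to-leaf traversal
```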

b) Describe the various types of record organizations.


In addition to storing individual records, information that is common to all records of a given
record type (such as the number of fields and field types) is stored in the system catalog, which can be
thought of as a description of the contents of a database, maintained by the DBMS. This avoids
repeated storage of the same information with each record of a given type. Records can be of fixed
length or variable length.
Fixed Length Records:
For fixed-length records, the fields can be stored consecutively and the address of a field can be
easily calculated. In a fixed-length record, each field has a fixed length (that is, the value of this field is
of the same length in all records), and the number of fields is also fixed. The fields of such a record
can be stored consecutively, and, given the address of the record, the address of a particular field can
be calculated using information about the lengths of preceding fields, which is available in the system
catalog.

Variable Length Records:


Variable-length records can be stored with an array of offsets at the beginning of the record, or the
individual fields can be separated by a delimiter symbol. The organization with an array of offsets
offers direct access to fields (which can be important if records are long and contain many fields) and
support for null values.
Variable-length record formats can obviously be used to store fixed-length records as well; sometimes,
the extra overhead is justified by the added flexibility, because issues such as supporting null values
and adding fields to a record type arise with fixed-length records as well. Having variable-length fields
in a record can raise some subtle issues, especially when a record is modified.
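
A small Python sketch of the offset-array organization for variable-length records; the exact byte layout (a 2-byte field count followed by 2-byte offsets) is an assumption chosen for illustration.

```python
# Offset-array record format: header = count + field offsets, then the field bytes.
import struct

def pack_record(fields: list[bytes]) -> bytes:
    header = struct.pack("<H", len(fields))          # number of fields
    offset = 2 + 2 * len(fields)                     # data starts after the offset array
    for f in fields:
        header += struct.pack("<H", offset)          # starting offset of each field
        offset += len(f)
    return header + b"".join(fields)

def get_field(record: bytes, i: int) -> bytes:
    n = struct.unpack_from("<H", record, 0)[0]
    start = struct.unpack_from("<H", record, 2 + 2 * i)[0]
    end = struct.unpack_from("<H", record, 2 + 2 * (i + 1))[0] if i + 1 < n else len(record)
    return record[start:end]                         # direct access, no scan of earlier fields

rec = pack_record([b"Smith", b"", b"CSE"])           # an empty field can represent a null value
print(get_field(rec, 2))                             # b'CSE'
```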

ADITYA COLLEGE OF ENGINEERING


PUNGANUR ROAD, MADANAPALLE-517325
II-B.Tech(R15)-I-Semester II-Internal Examinations November -2016 (Objective)
15A05301- Database Management Systems (Computer Science Engineering)
Time :20 min
Max Marks : 10
Name :

Roll No:

__________________________________________________________________________________
I Answer all the questions each question carries 1 Mark
5*1=5
1) What are FD and Transitive Dependencies?
Functional dependency (FD) is a set of constraints between two sets of attributes in a relation. A functional
dependency says that if two tuples have the same values for attributes A1, A2, ..., An, then those two tuples
must also have the same values for attributes B1, B2, ..., Bn.
Functional dependency is represented by an arrow sign (→); that is, X → Y, where X functionally determines
Y. The left-hand side attributes determine the values of attributes on the right-hand side.
Transitive Dependency
A transitive dependency is a functional dependency which holds by virtue of transitivity. A transitive
dependency can occur only in a relation that has three or more attributes. Let A, B, and C designate three
distinct attributes (or distinct collections of attributes) in the relation. A → C is a transitive dependency
when it holds only because both A → B and B → C hold.
2) What is a clustered index?
A clustered index is a special type of index that reorders the way records in the table are physically stored.
Therefore, a table can have only one clustered index. The leaf nodes of a clustered index contain the data
pages.
3) Define lock and list various types of lock modes in multi granularity locking.
A lock is a variable associated with a data item that describes the status of the item with respect to the possible
operations that can be applied to it. The lock modes used in multiple-granularity locking are shared (S),
exclusive (X), intention-shared (IS), intention-exclusive (IX), and shared with intention-exclusive (SIX).

4) What is Multi version 2PL.


The multiversion two-phase locking protocol attempts to combine the advantages of multiversion
concurrency control with the advantages of two-phase locking. This protocol differentiates between
read-only transactions and update transactions. Update transactions perform rigorous two-phase locking; that is,
they hold all locks up to the end of the transaction. Thus, they can be serialized according to their
commit order.
5) What is a checkpoint.

Keeping and maintaining logs in real time and in a real environment may fill up all the memory
space available in the system. As time passes, the log file may grow too big to be handled at all.
Checkpoint is a mechanism where all the previous logs are removed from the system and stored
permanently in a storage disk. Checkpoint declares a point before which the DBMS was in
consistent state, and all the transactions were committed.

II Answer all the questions each question carries ½ Mark

5*½=2½

Fill in the blanks with appropriate answers given below.


(Analysis, candidate key, BCNF, Precedence Graph, 2PL)
6. ARIES first pass is Analysis
7. Minimal super key is called Candidate Key
8. Testing for serializability can be easily achieved with the help of Precedence Graph
9. Minimum required level of normalization for a table is BCNF
10. Example for lock based pessimistic concurrency control protocol is 2PL
III Answer all the questions each question carries ½ Mark

5*½=2½
11. Deadlocks are possible only when one of the transactions wants to obtain a(n) ____ lock on a data item. [ C ]
a. Binary    b. Exclusive    c. Shared    d. Complete
12. A DBMS uses a transaction ____ to keep track of all transactions that update the database. [ A ]
a. log    b. table    c. block    d. statement
13. Recovery is possible if we maintain a ____. [ B ]
a. Dirty Record    b. Log file    c. W-W conflict    d. Tree
14. Example of dense index is ____. [ C ]
a. Ternary    b. Secondary    c. Primary    d. Clustered
15. Join Dependency is removed in ____. [ D ]
a. 2NF    b. Denormalization    c. 3NF    d. 5NF
