Static and Dynamic Hashing

The document discusses static and dynamic hashing techniques in database management systems, highlighting their importance for efficient data retrieval. Static hashing uses a fixed address generated by a hash function, while dynamic hashing (specifically extendible hashing) allows for the dynamic growth and shrinkage of data buckets as records change. Key concepts include data buckets, hash functions, and methods to handle bucket overflow, with advantages and limitations of each hashing approach outlined.


STATIC HASHING AND DYNAMIC HASHING

In a database management system, retrieving a particular record by searching through all the index values is very inefficient. In this situation, the hashing technique comes into the picture.
Hashing is an efficient technique to directly compute the location of the desired data on the disk without using an index structure. Data is stored in data blocks whose addresses are generated using a hash function. The memory location where these records are stored is called a data block or data bucket.

Hash File Organization:

• Data bucket – Data buckets are the memory locations where the records are stored. These buckets are also considered the unit of storage.
• Hash function – A hash function is a mapping function that maps the set of search keys to actual record addresses. Generally, the hash function uses the primary key to generate the hash index, the address of the data block. The hash function can range from a simple mathematical function to any complex mathematical function.
• Hash index – The prefix of an entire hash value is taken as the hash index. Every hash index has a depth value to signify how many bits are used for computing the hash function. With n such bits, 2^n buckets can be addressed. When all these bits are consumed, the depth value is increased by one and twice as many buckets are allocated.
Hashing is further divided into two subcategories:

Static Hashing –

In static hashing, when a search-key value is provided, the hash function always computes the same address. For example, if we generate the address for STUDENT_ID = 104 using the hash function mod(5), the result is always the same bucket address, 4. The bucket address never changes here. Hence, the number of data buckets in memory for static hashing remains constant throughout.
Operations –
• Insertion – When a new record is inserted into the table, the hash function h generates a bucket address for the new record based on its hash key K:
Bucket address = h(K)
• Searching – When a record needs to be searched, the same hash function is used to retrieve its bucket address. For example, to retrieve the whole record for ID 104 with the hash function mod(5) applied to that ID, the bucket address generated is 4. We then go directly to address 4 and retrieve the whole record for ID 104. Here, the ID acts as the hash key.
• Deletion – To delete a record, we first fetch the record that is to be deleted using the hash function, and then remove the record from that address in memory.
• Updation – The data record that needs to be updated is first searched for using the hash function, and then the data record is updated. (A sketch of all four operations follows this list.)
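To make the operations concrete, here is a minimal Python sketch. It is an illustration only, not how a DBMS lays out blocks on disk: the in-memory bucket list, the mod(5) hash function, and the record strings are all assumptions made for this example.

NUM_BUCKETS = 5                              # fixed bucket count: static hashing
buckets = [[] for _ in range(NUM_BUCKETS)]   # each list stands in for a data block

def h(key):
    return key % NUM_BUCKETS                 # always computes the same address

def insert(key, record):
    buckets[h(key)].append((key, record))    # Bucket address = h(K)

def search(key):
    for k, record in buckets[h(key)]:        # look only inside the computed bucket
        if k == key:
            return record
    return None

def delete(key):
    addr = h(key)
    buckets[addr] = [(k, r) for (k, r) in buckets[addr] if k != key]

insert(104, "record for ID 104")             # h(104) = 4
print(search(104))                           # goes straight to bucket 4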
Now, suppose we want to insert a new record into the file, but the data bucket address generated by the hash function is not empty (data already exists at that address). This situation in static hashing is called bucket overflow, and it is critical to handle.
How do we insert data in this case?
Several methods exist to overcome this situation. Some commonly used methods are discussed below:
1. Open hashing –
In the open hashing method, the next available data block is used to enter the new record instead of overwriting the older one. This method is also called linear probing.
For example, suppose D3 is a new record that needs to be inserted, and the hash function generates address 105, which is already full. The system then searches for the next available data bucket, 123, and assigns D3 to it.
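A short sketch of this probing loop in Python, assuming a fixed-size table where each slot holds one record (the table size of 7 is arbitrary):

TABLE_SIZE = 7
table = [None] * TABLE_SIZE

def insert_linear(key, record):
    home = key % TABLE_SIZE
    for i in range(TABLE_SIZE):              # probe each slot at most once
        slot = (home + i) % TABLE_SIZE       # next available data block
        if table[slot] is None:
            table[slot] = (key, record)      # use it instead of overwriting
            return slot
    raise RuntimeError("all buckets are full")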

2. Closed hashing –
In the closed hashing method, a new data bucket is allocated with the same address and linked after the full data bucket. This method is also known as overflow chaining.
For example, suppose we have to insert a new record D3 into the table. The static hash function generates the data bucket address 105, but this bucket is too full to store the new data. In this case, a new data bucket is added at the end of data bucket 105 and linked to it. The new record D3 is then inserted into the new bucket.
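A sketch of overflow chaining, assuming a small in-memory Bucket class with a fixed capacity (both the class and the capacity of 2 are illustrative):

class Bucket:
    def __init__(self, capacity=2):
        self.records = []
        self.capacity = capacity
        self.overflow = None                       # link to the chained bucket

    def insert(self, key, record):
        if len(self.records) < self.capacity:
            self.records.append((key, record))
        else:                                      # bucket is full:
            if self.overflow is None:
                self.overflow = Bucket(self.capacity)  # allocate a new bucket
            self.overflow.insert(key, record)      # and place the record there

    def search(self, key):
        for k, r in self.records:
            if k == key:
                return r
        return self.overflow.search(key) if self.overflow else None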
• Quadratic probing:
Quadratic probing is very similar to open hashing (linear probing). The difference is that while in linear probing the gap between the old and new bucket addresses is linear (fixed), quadratic probing uses a quadratic function to determine the new bucket address.
• Double hashing:
Double hashing is another method similar to linear probing. Here the gap is fixed, as in linear probing, but this fixed gap is calculated using another hash function. That is why it is named double hashing.
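The two probe sequences can be written as small functions. The table size of 11 and the second hash function h2 below are common illustrative choices, not fixed rules:

TABLE_SIZE = 11

def quadratic_probe(key, i):
    # i-th probe: the offset grows quadratically instead of linearly
    return (key % TABLE_SIZE + i * i) % TABLE_SIZE

def h2(key):
    return 1 + (key % (TABLE_SIZE - 1))      # second hash; never returns 0

def double_hash_probe(key, i):
    # i-th probe: fixed step size, but the step comes from another hash function
    return (key % TABLE_SIZE + i * h2(key)) % TABLE_SIZE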

Dynamic Hashing –

The drawback of static hashing is that it does not expand or shrink dynamically as the size of the database grows or shrinks. In dynamic hashing, data buckets grow or shrink (are added or removed dynamically) as the number of records increases or decreases. Dynamic hashing is also known as extendible hashing.

Extendible Hashing (Dynamic approach to DBMS)


Extendible hashing is a dynamic hashing method wherein directories and buckets are used to hash data. It is a highly flexible method in which the hash function itself also undergoes dynamic changes.
Main features of Extendible Hashing: The main features of this hashing technique are:
• Directories: The directories store the addresses of the buckets as pointers. An id is assigned to each directory, and this id may change each time a directory expansion takes place.
• Buckets: The buckets store the actual hashed data.
Basic Structure of Extendible Hashing:
Frequently used terms in Extendible Hashing:
• Directories: These containers store pointers to buckets. Each directory is given a unique id, which may change each time an expansion takes place. The hash function returns a directory id, which is used to navigate to the appropriate bucket. Number of directories = 2^(global depth).
• Buckets: They store the hashed keys. Directories point to buckets. A bucket may have more than one pointer to it if its local depth is less than the global depth.
• Global Depth: It is associated with the directories. It denotes the number of bits the hash function uses to categorize the keys. Global depth = number of bits in the directory id.
• Local Depth: It is the same as the global depth, except that the local depth is associated with the buckets rather than the directories. The local depth, in combination with the global depth, is used to decide the action to be performed when an overflow occurs. The local depth is always less than or equal to the global depth.
• Bucket Splitting: When the number of elements in a bucket exceeds a particular size, the bucket is split into two parts.
• Directory Expansion: Directory expansion takes place when a bucket overflows and the local depth of the overflowing bucket is equal to the global depth.
Basic Working of Extendible Hashing:

• Step 1 – Analyze the data elements: Data elements may exist in various forms, e.g., integer, string, float, etc. Here, let us consider data elements of type integer, e.g., 49.
• Step 2 – Convert into binary format: Convert the data element into binary form. For string elements, consider the ASCII equivalent integer of the starting character and then convert that integer into binary form. Since we have 49 as our data element, its binary form is 110001.
• Step 3 – Check the global depth of the directory: Suppose the global depth of the hash directory is 3.
• Step 4 – Identify the directory: Consider the 'global depth' number of LSBs in the binary number and match it to a directory id.
E.g., the binary obtained is 110001 and the global depth is 3. So the hash function returns the 3 LSBs of 110001, viz. 001.
• Step 5 – Navigation: Now navigate to the bucket pointed to by the directory with directory id 001.
• Step 6 – Insertion and overflow check: Insert the element and check whether the bucket overflows. If an overflow is encountered, go to Step 7 followed by Step 8; otherwise, go to Step 9.
• Step 7 – Tackling the overflow condition during data insertion: Often, while inserting data into the buckets, a bucket overflows. In such cases, we need to follow an appropriate procedure to avoid mishandling the data.
First, check whether the local depth is less than or equal to the global depth. Then choose one of the cases below.
• Case 1: If the local depth of the overflowing bucket is equal to the global depth, then directory expansion as well as a bucket split needs to be performed. Then increment both the global depth and the local depth by 1, and assign the appropriate pointers.
Directory expansion doubles the number of directories present in the hash structure.
• Case 2: If the local depth is less than the global depth, then only a bucket split takes place. Then increment only the local depth by 1, and assign the appropriate pointers.
• Step 8 – Rehashing of split-bucket elements: The elements present in the overflowing bucket that was split are rehashed w.r.t. the new global depth of the directory.
• Step 9 – The element is successfully hashed. (A code sketch that puts these steps together follows this list.)
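Below is a compact, in-memory Python sketch of Steps 1–9, assuming integer keys, a fixed bucket size, and a hash function that returns the global-depth LSBs of the key. The class and method names are our own, and a repeated overflow immediately after a split is not handled, to keep the sketch short.

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.keys = []

class ExtendibleHashTable:
    def __init__(self, bucket_size=3):
        self.global_depth = 1                      # initially, global depth = 1
        self.bucket_size = bucket_size
        self.directory = [Bucket(1), Bucket(1)]    # directory ids 0 and 1

    def _dir_id(self, key):
        # Steps 2-4: the hash function returns the global-depth LSBs of the key
        return key & ((1 << self.global_depth) - 1)

    def insert(self, key):
        bucket = self.directory[self._dir_id(key)]    # Step 5: navigate
        bucket.keys.append(key)                       # Step 6: insert, then
        if len(bucket.keys) <= self.bucket_size:      # check for overflow
            return
        if bucket.local_depth == self.global_depth:   # Step 7, Case 1:
            self.directory += self.directory          # directory expansion
            self.global_depth += 1                    # (doubling)
        self._split(bucket)                           # Cases 1 and 2: bucket split

    def _split(self, bucket):
        bucket.local_depth += 1
        new_bucket = Bucket(bucket.local_depth)
        # Directory entries whose new distinguishing bit is 1 point to the new bucket
        high_bit = 1 << (bucket.local_depth - 1)
        for i, b in enumerate(self.directory):
            if b is bucket and (i & high_bit):
                self.directory[i] = new_bucket
        keys, bucket.keys = bucket.keys, []
        for k in keys:                                # Step 8: rehash the split
            self.directory[self._dir_id(k)].keys.append(k)   # bucket's elements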
Example based on Extendible Hashing: Now, let us consider a prominent example of hashing the following elements: 16, 4, 6, 22, 24, 10, 31, 7, 9, 20, 26.
Bucket size: 3 (assumed)
Hash function: Suppose the global depth is X. Then the hash function returns the X LSBs.
• Solution: First, calculate the binary form of each of the given numbers.
16 - 10000
4 - 00100
6 - 00110
22 - 10110
24 - 11000
10 - 01010
31 - 11111
7 - 00111
9 - 01001
20 - 10100
26 - 11010
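These binary forms, and the LSB hash function itself, can be checked with a few lines of Python; the snippet and its 5-digit padding are ours, matching the list above:

keys = [16, 4, 6, 22, 24, 10, 31, 7, 9, 20, 26]
X = 1                                    # global depth: the hash returns the X LSBs
for k in keys:
    lsbs = k & ((1 << X) - 1)            # e.g. 16 -> 0, 31 -> 1 when X = 1
    print(f"{k:2d} -> {k:05b}  hash = {lsbs:0{X}b}")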
• Initially, the global depth and local depth are always 1. Thus, the hashing frame starts with two directories, 0 and 1, each pointing to an empty bucket of local depth 1.

• Inserting 16:
The binary format of 16 is 10000 and the global depth is 1. The hash function returns the 1 LSB of 10000, which is 0. Hence, 16 is mapped to the directory with id = 0.

• Inserting 4 and 6:
Both 4 (100) and 6 (110) have 0 as their LSB. Hence, they too are hashed to the bucket pointed to by directory 0, which now holds 16, 4, and 6.

• Inserting 22: The binary form of 22 is 10110. Its LSB is 0. The bucket pointed to by directory 0 is already full. Hence, overflow occurs.
• As directed by Step 7, Case 1, since local depth = global depth, the bucket splits and directory expansion takes place. Rehashing of the numbers present in the overflowing bucket takes place after the split, and since the global depth is incremented by 1, the global depth is now 2. Hence, 16, 4, 6, and 22 are rehashed w.r.t. their 2 LSBs [16 (10000), 4 (100), 6 (110), 22 (10110)]: directory 00 now points to {16, 4} and directory 10 to {6, 22}.

• Notice that the bucket that did not overflow has remained untouched. But since the number of directories has doubled, we now have two directories, 01 and 11, pointing to the same bucket. This is because the local depth of that bucket has remained 1, and any bucket with a local depth less than the global depth is pointed to by more than one directory.
• Inserting 24 and 10: 24 (11000) and 10 (01010) are hashed to the directories with ids 00 and 10, respectively. We encounter no overflow condition here.
• Inserting 31, 7, 9: All of these elements [31 (11111), 7 (111), 9 (1001)] have either 01 or 11 as their 2 LSBs. Hence, they are mapped to the bucket pointed to by directories 01 and 11. We do not encounter any overflow condition here.

• Inserting 20: Insertion of data element 20 (10100) again causes an overflow problem: 20 maps to the bucket pointed to by directory 00, which is already full.
• As directed by Step 7, Case 1, since the local depth of the bucket = the global depth, directory expansion (doubling) takes place along with a bucket split, and the elements present in the overflowing bucket are rehashed with the new global depth of 3. In the new hash table, directory 000 points to {16, 24} and directory 100 points to {4, 20}.

• Inserting 26: The global depth is 3. Hence, the 3 LSBs of 26 (11010), namely 010, are considered. Therefore, 26 best fits in the bucket pointed to by directory 010.
• That bucket overflows, and, as directed by Step 7, Case 2, since the local depth of the bucket < the global depth (2 < 3), the directories are not doubled; only the bucket is split and its elements are rehashed, leaving directory 010 pointing to {10, 26} and directory 110 pointing to {6, 22}.
Finally, the output of hashing the given list of numbers is obtained.

• Hashing of the 11 numbers is thus completed.
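As a check, feeding this key sequence into the ExtendibleHashTable sketch given earlier (after Step 9) reproduces the final state; the printing loop below is ours and simply shows each distinct bucket once:

table = ExtendibleHashTable(bucket_size=3)
for k in [16, 4, 6, 22, 24, 10, 31, 7, 9, 20, 26]:
    table.insert(k)

printed = set()
for i, bucket in enumerate(table.directory):
    if id(bucket) not in printed:            # several directory ids may share a bucket
        printed.add(id(bucket))
        print(f"{i:0{table.global_depth}b} (local depth {bucket.local_depth}): {bucket.keys}")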


Key Observations:
1. A bucket has more than one pointer pointing to it if its local depth is less than the global depth.
2. When an overflow occurs in a bucket, all the entries in that bucket are rehashed with a new local depth.
3. Only if the local depth of the overflowing bucket is equal to the global depth are the directories doubled and the global depth incremented by 1.
4. The size of a bucket cannot be changed after the data insertion process begins.
Advantages:
1. Data retrieval is less expensive (in terms of computation).
2. There is no problem of data loss, since the storage capacity grows dynamically.
3. With dynamic changes in the hash function, the associated old values are rehashed w.r.t. the new hash function.
Limitations of Extendible Hashing:
1. The directory size may increase significantly if several records hash to the same directory while the record distribution remains non-uniform.
2. The size of every bucket is fixed.
3. Memory is wasted on pointers when the difference between the global depth and the local depth becomes drastic.
4. This method is complicated to code.
