22OBM103 Biometrics and Its Application End Semester Key
Advantages:
1. Enhanced Accuracy
2. Improved Security
3. Robustness to Variability
4. Increased Flexibility
5. Resistance to Spoofing
Limitations:
1. Complex Integration
2. Increased Storage and Processing Requirements
3. User Acceptance
4. Interoperability Issues
Differentiate between text-dependent and text-independent voice recognition systems.
Text-Dependent Voice Recognition:
Definition: Requires the speaker to utter specific predefined phrases or passwords for authentication.
Usage: Commonly used for verification in controlled environments like secure access applications.
Text-Independent Voice Recognition:
Definition: Authenticates the speaker based on their unique voice patterns without specific prompts.
Usage: Used in applications where users cannot be prompted with specific phrases, such as voice commands and voice search.
PART - B (4 x 16 marks = 64 marks)
11(a) Describe the architecture of a biometric system with its functional diagram. 16 CO1 An
Biometrics are computerized methods of identifying a person based on physiological and behavioural characteristics. The use of biometric systems has changed the way we identify and authenticate ourselves around the world: not only has the identification of people changed, but the time it takes to identify and verify people has also been significantly reduced. Face, fingerprint, handwriting, palmprint, hand geometry, gait, iris, retina, and voice are among the characteristics measured in biometric techniques.
1. Sensor: The sensor is the first block of the biometric system; it collects all the data needed for biometrics and is the interface between the system and the real world. Typically it is an image acquisition system, but the choice of sensor depends on the characteristic to be measured.
2. Pre-processing: It is the second block that executes all the pre-processing. Its
function is to enhance the input and to eliminate artifacts from the sensor, background
noise, etc. It performs some kind of normalization.
3. Feature extractor: This is the third and the most important step in the biometric system. Features are extracted so that the individual can be identified at a later stage; the goal of the feature extractor is to characterize the object to be recognized by a set of measurements.
4. Template generator: The template generator builds the templates used for authentication from the extracted features. A template is a vector of numbers or an image with distinctive traits; the characteristics obtained from the source are combined to form it. Templates are stored in the database for comparison and serve as input to the matcher.
5. Matcher: The matching phase is performed by a matcher. The acquired template is passed to the matcher, which compares it with the stored templates using algorithms such as the Hamming distance (see the sketch after this list). After matching the inputs, the result is generated.
6. Application device: It is a device that uses the results of a biometric system. The Iris
recognition system and facial recognition system are some common examples of
application devices.
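To make the matcher stage concrete, the following is a minimal sketch in Python of template comparison using the Hamming distance mentioned in item 5; the 64-bit templates, the database contents, and the 0.25 decision threshold are hypothetical values chosen purely for illustration.

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of bits that differ between two binary templates."""
    return np.count_nonzero(a != b) / a.size

def match(query: np.ndarray, database: dict, threshold: float = 0.25):
    """Return the identity of the closest stored template, or None if
    no template is within the decision threshold."""
    best_id, best_dist = None, 1.0
    for identity, template in database.items():
        d = hamming_distance(query, template)
        if d < best_dist:
            best_id, best_dist = identity, d
    return (best_id, best_dist) if best_dist <= threshold else (None, best_dist)

# Hypothetical 64-bit templates standing in for extracted feature vectors.
rng = np.random.default_rng(0)
db = {"user_a": rng.integers(0, 2, 64), "user_b": rng.integers(0, 2, 64)}
query = db["user_a"].copy()
query[:5] ^= 1                      # simulate small acquisition noise
print(match(query, db))             # expected to match "user_a"
```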
(OR)
11(b) Discuss the different errors associated with biometric systems and their performance measures. 16 CO1 An
Biometric identification systems utilize unique physiological or behavioral characteristics
of individuals for identification purposes. These systems capture and analyze biometric
data to verify or recognize individuals. Here's an overview of the identification process in
biometrics:
Capture: When an individual seeks to be identified by the system, their biometric trait is
captured using sensors or devices. For example, a fingerprint scanner may capture the
unique patterns of a person's fingerprint, or a facial recognition system may capture the
facial features of the individual from a photograph or video feed.
Extraction: The captured biometric data is then processed to extract key features or
characteristics that are unique to the individual. This process involves algorithms that
analyze the biometric trait and convert it into a standardized format for comparison.
Comparison: The extracted features are compared against the reference templates stored in
the system's database. The system determines the similarity between the captured
biometric data and the stored templates using matching algorithms. If a match is found
above a certain threshold, the individual is identified.
Decision: Based on the comparison results, the system makes a decision regarding the
identity of the individual. If the similarity score exceeds a predefined threshold and meets
certain criteria, the individual is positively identified. Otherwise, the identification attempt
may be rejected or flagged for further review.
Feedback: The system provides feedback to the user regarding the identification outcome.
This feedback may include displaying the individual's identity, granting access to a secure
area or system, or notifying security personnel of a potential security threat.
Biometric identification systems are widely used in various applications, including access
control, border security, law enforcement, banking, healthcare, and authentication for
electronic devices and online services. They offer a convenient and secure means of
identifying individuals based on their unique biometric traits, enhancing security and
efficiency in various domains.
In biometric systems, errors can occur during the process of identification or verification,
impacting the system's performance. Various performance measures are used to evaluate
the accuracy and reliability of biometric systems. Here's an overview of biometric system
errors and performance measures, along with their merits and demerits:
Errors in Biometric Systems:
False Acceptance Rate (FAR): This error occurs when the system incorrectly identifies an
impostor as a genuine user. It indicates the likelihood of unauthorized access being
granted.
False Rejection Rate (FRR): This error occurs when the system incorrectly rejects a
genuine user. It indicates the likelihood of legitimate users being denied access.
Performance Measures:
Accuracy: Accuracy measures how effectively the biometric system distinguishes between
genuine users and impostors. It is often expressed as the overall correct identification rate.
Equal Error Rate (EER): EER is the point where FAR and FRR are equal. It provides a
single threshold at which the system balances the risk of false acceptance and false
rejection.
Detection Error Tradeoff (DET) Curve: DET curve plots FAR against FRR on logarithmic
axes, providing a comprehensive view of the system's performance across different
operating points.
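As an illustration of how these measures relate to a decision threshold, the following is a minimal sketch that computes FAR, FRR, and an approximate EER from lists of genuine and impostor similarity scores; the score values are made up for illustration, and the EER is found by a simple threshold sweep rather than any particular standard procedure.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: impostor scores accepted; FRR: genuine scores rejected
    (higher score = better match)."""
    far = np.mean(np.asarray(impostor) >= threshold)
    frr = np.mean(np.asarray(genuine) < threshold)
    return far, frr

def approximate_eer(genuine, impostor):
    """Sweep thresholds and return the point where FAR and FRR are closest."""
    best_t, best_gap = 0.0, 1.0
    for t in np.linspace(0, 1, 1001):
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_t, best_gap = t, abs(far - frr)
    far, frr = far_frr(genuine, impostor, best_t)
    return best_t, (far + frr) / 2

# Hypothetical similarity scores in [0, 1].
genuine_scores  = [0.91, 0.85, 0.78, 0.88, 0.64, 0.93]
impostor_scores = [0.32, 0.45, 0.51, 0.28, 0.60, 0.39]
print(far_frr(genuine_scores, impostor_scores, threshold=0.7))
print(approximate_eer(genuine_scores, impostor_scores))
```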
Merits:
Security Enhancement: Biometric systems offer enhanced security by providing a more
reliable method of user authentication compared to traditional methods such as passwords
or PINs.
Efficiency: Biometric systems can process large volumes of data rapidly, enabling quick
identification and authentication of individuals in real-time.
Demerits:
Cost: Implementing biometric systems can be costly due to the need for specialized
hardware, software, infrastructure, and maintenance.
Reliability: Biometric systems may encounter reliability issues due to factors such as
environmental conditions, variations in biometric traits, or technical failures.
12(a) Explain the principle of palm vein authentication systems. 16 CO2
Palm vein authentication uses the blood vessel patterns of the palm vein in the
subcutaneous tissue of the human body to discriminate between individuals. Palm vein
patterns are captured by the camera with near-infrared light. When a hypodermic vein is
irradiated with near-infrared light, the reduced haemoglobin contained in the vein absorbs
near-infrared light and the hypodermic vein creates a shadow on an image. The shadow
pattern is then extracted from the captured image of the palm vein pattern using image
processing technology. The resulting vein patterns are compared using vessel structure
features such as directions and bifurcations, or by using the patterns themselves. In
practical terms, palm vein patterns of a hand are used for authentication because such parts
of the hand are easy to expose to a sensor.
Palm vein authentication technology has been deployed for its ease of use and the assurance its robust security gives users. It has been widely adopted worldwide for personal identification at financial institutions and as a computer login and room-entrance control method at corporations.
Figure 1: Capturing a palm vein image with a near-infrared sensor.
Palm vein authentication systems generally use optical palm vein sensors for enrolment of
the palm vein image or palm vein pattern. In enrolment, users put their palm above the
sensor (figure 1). The sensor detects the palm and emits near-infrared light, as shown in figure 1. When the palm is illuminated with near-infrared light, the rays penetrate the surface of the hand and are scattered inside it: the near-infrared light enters the palm as incident light, where it is absorbed and scattered.
A part of the incident light is also absorbed within the veins. Deoxygenated haemoglobin absorbs near-infrared light at a wavelength of approximately 760 nm. The palm vein sensor emits near-infrared light and photographs the light that is scattered back from the palm.
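The sketch below gives a rough idea of how the dark vein "shadow" pattern could be isolated from a near-infrared palm image by smoothing and adaptive thresholding; it assumes the opencv-python package, and the synthetic image and parameter values are placeholders, not the processing actually used in commercial palm vein sensors.

```python
import cv2
import numpy as np

def extract_vein_pattern(nir_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of dark (vein) regions in a near-infrared palm image."""
    # Smooth to suppress sensor noise before thresholding.
    blurred = cv2.GaussianBlur(nir_image, (9, 9), 0)
    # Veins absorb near-infrared light, so they appear darker than the surrounding
    # tissue; an inverted adaptive threshold keeps the dark, vein-like structures.
    return cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 5)

# Synthetic stand-in for a captured near-infrared palm image.
nir = np.full((200, 200), 180, dtype=np.uint8)
cv2.line(nir, (30, 20), (60, 180), 120, 4)    # fake vein shadow
cv2.line(nir, (120, 10), (150, 190), 110, 5)  # fake vein shadow
vein_mask = extract_vein_pattern(nir)
print(vein_mask.shape, int(np.count_nonzero(vein_mask)))
```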
(OR)
12(b) Explain the principle of Minutiae Based Extraction in Fingerprint Recognition system. 16 CO2 Ap
Minutiae-based extraction identifies local ridge characteristics of a fingerprint, chiefly ridge endings and ridge bifurcations (minutiae). The fingerprint image is typically enhanced, binarised, and thinned, after which each minutia is recorded by its position and ridge orientation; two fingerprints are then matched by comparing their minutiae sets.
Advantages:
Accuracy: Minutiae points are stable and persistent over time, providing high
accuracy in fingerprint recognition.
Storage Efficiency: Templates based on minutiae points are compact, requiring
less storage space.
Robustness: Resistant to small variations in fingerprint images due to factors like
rotation, translation, and slight deformation.
Limitations:
Quality Sensitivity: Minutiae extraction degrades on low-quality, smudged, or partial fingerprint images.
Preprocessing Dependence: Reliable detection of minutiae depends on accurate enhancement, binarisation, and thinning; spurious or missed minutiae reduce matching accuracy.
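A minimal sketch of minutiae-based matching under strong simplifying assumptions: each minutia is reduced to (x, y, ridge angle), the two prints are assumed to be pre-aligned, and the tolerance values and sample minutiae are hypothetical.

```python
import math

def minutiae_match_score(probe, gallery, dist_tol=12.0, angle_tol=0.35):
    """Count probe minutiae that pair with a gallery minutia within the
    given position and angle tolerances (both prints assumed pre-aligned)."""
    matched, used = 0, set()
    for (x1, y1, a1) in probe:
        for j, (x2, y2, a2) in enumerate(gallery):
            if j in used:
                continue
            close = math.hypot(x1 - x2, y1 - y2) <= dist_tol
            aligned = abs(math.atan2(math.sin(a1 - a2), math.cos(a1 - a2))) <= angle_tol
            if close and aligned:
                matched += 1
                used.add(j)
                break
    return matched / max(len(probe), len(gallery))

# Hypothetical minutiae as (x, y, ridge angle in radians).
probe   = [(10, 22, 0.5), (40, 80, 1.2), (65, 30, 2.0)]
gallery = [(12, 20, 0.55), (41, 78, 1.25), (90, 90, 0.1)]
print(minutiae_match_score(probe, gallery))  # ~0.67: two of three minutiae pair up
```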
13(a) Discuss the role of neural networks in face recognition used in biometric systems. 16 CO3 Ap
Neural Network for Face Recognition
A neural network (NN) is an interconnection of neurons, the cells of the network, and is a computer simulation of the human brain. Like a human brain, it consists of a large number of neurons connected together in different layers.
When a human brain receives an idea, the thinking process starts: the idea is selected, a decision is made, and the decision is converted into action. The NN is likewise arranged in layers: input, hidden, and output, and there may be multiple hidden layers.
Each layer has a certain number of nodes or neurons, and each connection carries a weight. For example, whenever we are in deep thought or in trouble, we place our fingers on our forehead, press it, and then act; in a similar manner, the NN is trained.
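To illustrate the layered structure described above, the following is a rough sketch of a forward pass through a tiny fully connected network on a flattened face image; the layer sizes, weights, and input image are random placeholders, whereas a real face recognition network would learn its weights from labelled face data.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: a 32x32 grayscale face flattened to 1024 inputs,
# one hidden layer of 64 neurons, and 5 enrolled identities as outputs.
W1, b1 = rng.normal(0, 0.01, (64, 1024)), np.zeros(64)
W2, b2 = rng.normal(0, 0.01, (5, 64)), np.zeros(5)

def predict_identity(face_image: np.ndarray) -> int:
    """Forward pass: input layer -> hidden layer -> output layer."""
    x = face_image.reshape(-1) / 255.0        # normalise pixel intensities
    hidden = relu(W1 @ x + b1)                # weighted sum + activation
    scores = softmax(W2 @ hidden + b2)        # probability per enrolled identity
    return int(np.argmax(scores))

face = rng.integers(0, 256, (32, 32)).astype(np.float64)  # stand-in face image
print(predict_identity(face))
```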
(OR)
The principle of face detection in video sequences involves the automatic identification
and localization of human faces across multiple frames of a video.
1. Frame-by-Frame Analysis:
o Image Preprocessing: Each frame of the video undergoes preprocessing
steps such as resizing, grayscale conversion, and noise reduction to enhance
the clarity of facial features.
2. Feature Extraction:
o Feature-Based Methods: Algorithms look for distinctive facial features
like eyes, nose, mouth, and their spatial relationships.
o Machine Learning Techniques: Classifiers trained on labeled datasets
(e.g., Haar cascades, Viola-Jones method) detect patterns indicative of
faces.
3. Detection Algorithms:
o Sliding Window Approach: A window of fixed size slides across the
image at various scales, checking for the presence of face-like patterns.
o Convolutional Neural Networks (CNNs): Deep learning models trained
to recognize faces by learning hierarchical features.
4. Localization:
o Detected faces are localized by drawing bounding boxes around them,
indicating their position (x, y coordinates) and size (width, height) within
each frame.
5. Tracking Across Frames:
o Advanced algorithms track identified faces across consecutive frames to
maintain continuity and reduce redundant detections.
Example:
Consider a video surveillance system that detects faces in real-time to monitor a public
space.
Input: The system continuously receives video feeds from cameras placed in the
monitored area.
Processing: Each frame of the video undergoes preprocessing to prepare it for face
detection. This includes resizing the frame, converting it to grayscale, and
enhancing contrast.
Face Detection: Using a pre-trained face detection model (e.g., Haar cascades or a
CNN-based model), the system analyzes each frame to detect faces. The model
identifies regions of interest where faces are likely to be present based on learned
patterns.
Localization: Once a face is detected, the system draws a bounding box around it
in the frame, indicating the position and size of the detected face.
Tracking: If the same person appears in subsequent frames, the system tracks their
movement by maintaining the identity of detected faces across frames. This
tracking helps in monitoring the person's activities over time.
Output: The system outputs real-time alerts or records video segments containing
detected faces for further analysis or action by security personnel.
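A minimal sketch of the frame-by-frame pipeline described above, assuming the opencv-python package and the Haar cascade for frontal faces that ships with it; the blank synthetic frame stands in for a real video feed, which would normally be read frame by frame from a camera (e.g. via cv2.VideoCapture).

```python
import cv2
import numpy as np

# Pre-trained frontal-face Haar cascade bundled with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame: np.ndarray):
    """Preprocess one frame and return bounding boxes (x, y, w, h) of faces."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    gray = cv2.equalizeHist(gray)                    # contrast enhancement
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def annotate(frame: np.ndarray, boxes) -> np.ndarray:
    """Localise detections by drawing bounding boxes on the frame."""
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame

# Stand-in for one frame of a video feed; a real system would loop over frames
# and track detections across consecutive frames to maintain identity.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
boxes = detect_faces(frame)
print(f"{len(boxes)} face(s) detected")
annotated = annotate(frame, boxes)
```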
14(a) Describe the basic architecture of multimodal biometric systems with their associated functions. 16 CO5 An
All the biometric systems discussed so far are unimodal: they take a single source of information for authentication. As the name suggests, multimodal biometric systems accept information from two or more biometric inputs.
A multimodal biometric system increases the scope and variety of input information the system takes from the users for authentication.
Unimodal systems have to deal with various challenges such as lack of secrecy, non-universality of samples, the extent of the user's comfort and freedom while dealing with the system, spoofing attacks on stored data, etc.
A multimodal biometric system has all the conventional modules a unimodal system has −
Capturing module
Feature extraction module
Comparison module
Decision making module
In addition, it has a fusion technique to integrate the information from the different authentication systems. The fusion can be done at any of the following levels − sensor level, feature extraction level, match score level, or decision level.
Within a multimodal biometric system, the number of traits and components can vary − for example, multiple sensors for a single trait, multiple instances or samples of the same trait, multiple algorithms applied to one sample, or a combination of different traits.
A multimodal biometric system that combines face and ear recognition for identifying
individuals leverages the unique advantages of each biometric modality. Here’s how such
a system might function:
1. Face Recognition:
o Capture and Processing: The system captures an image of the person’s
face using a camera. Facial recognition algorithms detect key facial features
such as the eyes, nose, and mouth.
o Feature Extraction: Unique facial characteristics, such as distances
between features and overall facial structure, are extracted and converted
into a digital template.
o Matching: During verification or identification, the captured face image is
compared against stored face templates in the database using matching
algorithms. The system calculates a similarity score to determine if there is
a match.
2. Ear Recognition:
o Capture and Processing: An additional capture device, such as an ear
scanner or high-resolution camera, captures the image of the person’s ear.
o Feature Extraction: Ear recognition algorithms analyze specific features
of the ear, such as the shape, size, and contours. Key points like the earlobe,
helix, and tragus are identified.
o Matching: Similar to face recognition, the extracted ear features are
compared against stored ear templates in the database. Matching algorithms
compute a similarity score for verification or identification purposes.
3. Integration and Decision Making:
o Combined Score: The multimodal system integrates the similarity scores
from both face and ear recognition processes. Various fusion techniques,
such as score-level fusion (averaging scores) or decision-level fusion
(voting mechanisms), can be used to combine the results.
o Decision Making: Based on the combined score and a predefined
threshold, the system makes a decision to accept or reject the person’s
identity.
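A minimal sketch of the score-level fusion step for the face-and-ear example, assuming the two matchers already output normalised similarity scores in [0, 1]; the weights, threshold, and example scores are hypothetical.

```python
def fuse_scores(face_score: float, ear_score: float,
                w_face: float = 0.6, w_ear: float = 0.4) -> float:
    """Weighted-sum score-level fusion of two normalised similarity scores."""
    return w_face * face_score + w_ear * ear_score

def decide(face_score: float, ear_score: float, threshold: float = 0.65) -> bool:
    """Accept the claimed identity if the fused score clears the threshold."""
    return fuse_scores(face_score, ear_score) >= threshold

# Hypothetical matcher outputs for two verification attempts.
print(decide(face_score=0.82, ear_score=0.55))   # True: fused score 0.712
print(decide(face_score=0.40, ear_score=0.70))   # False: fused score 0.52
```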
Challenges:
Integration Complexity: Developing and integrating algorithms for both face and
ear recognition can be complex, requiring synchronization and calibration between
different capture devices.
Data Storage and Processing: Storing and processing biometric templates from
multiple modalities may require increased storage capacity and computational
resources.
Environmental Variability: External factors such as lighting conditions and
background noise can affect the performance of both face and ear recognition
algorithms.
1. Image Capture:
o The process begins with capturing an image of the person's hand
using a specialized scanner or camera designed for hand geometry
recognition.
o The scanner typically captures multiple views of the hand,
including the top, side, and fingers, to gather comprehensive
geometric data.
2. Feature Extraction:
o Algorithms process the captured images to extract key geometric
features from the hand. These features include:
Finger Lengths: Measurements of the lengths of fingers
from the base to the tip.
Finger Widths: Widths of individual fingers, including the
thickness.
Finger Positions: Positions of fingers in relation to each
other and to the palm.
Hand Shape: Overall shape and size of the hand, including
the palm and knuckles.
Creases and Knuckle Patterns: Patterns formed by the
creases and knuckles on the palm and fingers.
3. Template Creation:
o The extracted features are converted into a digital template or code
that represents the unique hand geometry of the individual.
o This template is typically a set of numerical values or a geometric
model that captures the relative positions and measurements of the
identified features.
4. Storage and Database Management:
o The digital templates of hand geometry data are stored securely in a
database. Each template is associated with the corresponding
individual's identity or user account.
5. Matching Process:
o During verification or identification, the person places their hand on
the scanner again.
o The system captures a new image of the hand and extracts its
features using the same algorithms.
o The extracted features are compared with the stored templates in
the database using matching algorithms.
o Matching algorithms calculate a similarity score or distance metric
to determine if the captured hand geometry matches any of the
stored templates.
6. Decision Making:
o Based on the matching results and a predefined threshold, the
system makes a decision to accept or reject the person's identity.
o In verification scenarios (one-to-one matching), the system verifies
if the captured hand matches the stored template of the claimed
identity.
o In identification scenarios (one-to-many matching), the system
searches the entire database to find a match for the captured hand.
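A minimal sketch of the matching and decision steps, assuming the extracted hand measurements have already been arranged into fixed-length feature vectors; the measurement values, feature ordering, and distance threshold are hypothetical.

```python
import numpy as np

def hand_distance(probe: np.ndarray, template: np.ndarray) -> float:
    """Euclidean distance between two hand-geometry feature vectors
    (e.g. finger lengths and widths in millimetres)."""
    return float(np.linalg.norm(probe - template))

def verify(probe, template, threshold: float = 6.0) -> bool:
    """One-to-one matching: accept if the distance is within the threshold."""
    return hand_distance(np.asarray(probe, float), np.asarray(template, float)) <= threshold

def identify(probe, database: dict, threshold: float = 6.0):
    """One-to-many matching: return the closest enrolled identity, if any."""
    probe = np.asarray(probe, float)
    identity, template = min(database.items(),
                             key=lambda kv: hand_distance(probe, np.asarray(kv[1], float)))
    return identity if verify(probe, template, threshold) else None

# Hypothetical feature vectors: four finger lengths and four finger widths (mm).
db = {"alice": [72, 80, 76, 60, 17, 18, 17, 15],
      "bob":   [78, 88, 84, 66, 20, 21, 20, 18]}
probe = [73, 81, 75, 61, 17, 18, 18, 15]
print(identify(probe, db))   # expected: "alice"
```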
Limitations: