Final Report
Chapter-1
INTRODUCTION
However, these approaches are not based on any inherent attribute of an individual,
and so they carry a number of disadvantages: tokens may be lost, stolen, forgotten,
or misplaced, and a PIN may be forgotten or guessed by an impostor. Security can be
easily breached in these systems when a password is divulged to an unauthorized user
or a card is stolen by an impostor; further, simple passwords are easy to guess (by
an impostor) and difficult passwords may be hard to recall (by a legitimate user).
Therefore they are unable to satisfy modern security requirements.
The objective of our project is to implement a security system using biometric
sensors, with an image enhancement and minutiae extraction algorithm capable of
matching different digitized fingerprints in standard image file formats (namely
BMP, JPEG, and TIFF) with a high level of accuracy and confidence.
1.2 MOTIVATION
Accurate automatic personal identification is critical in a wide range of application
domains such as national ID cards, electronic commerce, and automatic banking.
Biometrics, which refers to the automatic identification of a person based on his or
her physiological or behavioral characteristics, is inherently more reliable and
more capable of differentiating between a legitimate person and a fraudulent impostor
than traditional methods such as PINs and passwords. Automatic fingerprint
identification and face recognition are among the most reliable of the major
biometric technologies that are either currently available or under investigation.
We propose a security system using biometric sensors, with a simple and effective
approach for fingerprint image enhancement and minutiae extraction based on the
frequency and orientation of the local ridges, thereby extracting correct
minutiae points.
Automatic and reliable extraction of minutiae from fingerprint images is a critical step
in fingerprint matching. The quality of input fingerprint images plays an important
role in the performance of automatic identification and verification algorithms. In this
project we present a fast fingerprint enhancement and minutiae extraction algorithm
which improves the clarity of the ridge and valley structures of the input fingerprint
images based on the frequency and orientation of the local ridges and thereby
extracting correct minutiae.
Fingerprint based identification has been one of the most successful biometric
techniques used for personal identification. Each individual has unique fingerprints. A
fingerprint is the pattern of ridges and valleys on the fingertip. A fingerprint is thus
defined by the uniqueness of the local ridge characteristics and their relationships.
Minutiae points are these local ridge characteristics that occur either at a ridge ending
or a ridge bifurcation. A ridge ending is defined as the point where the ridge ends
abruptly and the ridge bifurcation is the point where the ridge splits into two or more
branches. Automatic minutiae detection becomes a difficult task in low quality
fingerprint images where noise and contrast deficiency result in pixel configurations
similar to that of minutiae. This is an important aspect that has been taken into
consideration in this presentation for extraction of the minutiae with a minimum error
in a particular location. A complete minutiae extraction scheme for automatic
fingerprint recognition systems is presented. The proposed method uses improved
alternatives for the image enhancement process, consequently increasing the
reliability of the minutiae extraction task.
1.3.1 BIOMETRICS
i. Enrollment module
The enrollment module is responsible for enrolling individuals into the biometric
system. During the enrollment phase, the biometric characteristic of an individual is
first scanned by a biometric reader to produce a raw digital representation of the
characteristic. In order to facilitate matching, the raw digital representation is
usually further processed by a feature extractor to generate a compact but
expressive representation, called a template.
Depending on the application, the template may be stored in the central database.
Depending on the application, biometrics can be used in one of two modes:
verification or identification. Verification (also called authentication) is used to
verify a person's identity, that is, to authenticate that individuals are who they
say they are. Identification is used to establish a person's identity, that is, to
determine who a person is.
1.3.4.1 ENROLLMENT
Either the reference template may then represent an amalgam of the captured data
or several enrollment templates may be stored. The quality of the template or
templates is critical in the overall success of the biometric application. Because
biometric features can change over time, people may have to reenroll to update their
reference template.
Some technologies can update the reference template during matching operations. The
enrollment process also depends on the quality of the identifier the enrollee presents.
The reference template is linked to the identity specified on the identification
document. If the identification document does not specify the individual's true
identity, the reference template will be linked to a false identity.
1.3.4.2 VERIFICATION
In verification systems, the step after enrollment is to verify that a person is who
he or she claims to be (i.e., the person who enrolled). After the individual provides an
identifier, the biometric is presented, which the biometric system captures, generating
a trial template that is based on the vendor's algorithm. The system then compares
the trial biometric template with this person's reference template, which was stored
in the system during enrollment, to determine whether the individual's trial and
stored templates match.
Verification systems can contain databases ranging from dozens to millions of
enrolled templates but are always predicated on matching an individual's presented
biometric against his or her reference template.
Nearly all verification systems can render a match/no-match decision in less than a
second.
1.3.4.3 IDENTIFICATION
In identification systems, the step after enrollment is to identify who the person is.
Unlike verification systems, no identifier is provided. To find a match, instead of
locating and comparing the person's reference template against his or her presented
biometric, the trial template is compared against the stored reference templates of
all individuals enrolled in the system. Identification systems are referred to as
1:M (one-to-M, or one-to-many) matching because an individual's biometric is
compared against multiple biometric templates in the system's database. There are
two types of
identification systems: positive and negative. Positive identification systems are
designed to ensure that an individual's biometric is enrolled in the database. The
anticipated result of a search is a match. A typical positive identification system
controls access to a secure building or secure computer by checking anyone who seeks
access against a database of enrolled employees. The goal is to determine whether a
person seeking access can be identified as having been enrolled in the system.
Negative identification systems are designed to ensure that a person's biometric
information is not present in a database. The anticipated result of a search is a
no-match. Comparing a person's biometric information against a database of all who
are registered in a public benefits program, for example, can ensure that this
person is not double-dipping by using fraudulent documentation to register under
multiple identities.
Another type of negative identification system is a watch list system. Such systems are
designed to identify people on the watch list and alert authorities for appropriate action.
For all other people, the system is to check that they are not on the watch list and allow
them normal passage. The people whose biometrics are in the database in these systems
may not have provided them voluntarily. For instance, for a surveillance system, the
biometric may be faces captured from mug shots provided by a law enforcement
agency.
A growing number of biometric technologies have been proposed over the past
several years, but only in the past 5 years have the leading ones become more widely
deployed.
Some technologies are better suited to specific applications than others, and some
are more acceptable to users. We describe six leading biometric technologies:
i. Facial Recognition
ii. Fingerprint Recognition
iii. Hand Geometry
iv. Iris Recognition
v. Signature Recognition
vi. Speaker Recognition
Fingerprint recognition is one of the best known and most widely used biometric
technologies. Automated systems have been commercially available since the early
1970s, and at the time of our study, we found there were more than 75 fingerprint
recognition technology companies. Until recently, fingerprint recognition was used
primarily in law enforcement applications.
Among all biometric traits, fingerprints have one of the highest levels of reliability and
have been extensively used by forensic experts in criminal investigations. A fingerprint
refers to the flow of ridge patterns in the tip of the finger. The ridge flow exhibits
anomalies in local regions of the fingertip (Figure), and it is the position and orientation
of these anomalies that are used to represent and match fingerprints.
The electronic era has ushered in a range of compact sensors that provide digital
images of these patterns. These sensors can be easily incorporated into existing
computer peripherals like the mouse or the keyboard (figure), thereby making this
mode of identification a very attractive proposition. This has led to the increased use of
automatic fingerprint-based authentication systems in both civilian and law
enforcement applications.
1.3.9 MINUTIAE
But when matching, the focus is only on the two main minutiae types: ridge endings
and ridge bifurcations.
The implementation of this project requires MATLAB software. Here we are using
MATLAB R2013a, where R2013a denotes the version.
In 2004, MATLAB had around one million users across industry and
academia. MATLAB users come from various backgrounds of engineering, science,
and economics.
i. RAM:
We require 2-4 GB of RAM for the implementation part of this project.
ii. Hard Disk:
We require 4-6 GB of hard disk space for the implementation part of this project.
This project is compatible up to Microsoft Windows 7, as we are using the biometric
device named NITGEN FINGKEY HAMSTER, whose driver is available for Windows 7 and
earlier versions. If we choose a different device that is compatible with later
versions and other operating systems, this project can work with any operating
system and its higher versions.
CHAPTER -2
LITERATURE REVIEW
This is the table showing the research papers we have referred to and a summary of
the concepts we have studied from those papers.
Research on biometric methods has gained renewed attention in recent years brought on
by an increase in security concerns. The recent world attitude towards terrorism has
influenced people and their governments to take action and be more proactive in
security issues. This need for security also extends to the need for individuals to protect,
among other things, their working environments, homes, personal possessions and
assets. Many biometric techniques have been developed and are being improved with
the most successful being applied in everyday law enforcement and security
applications. Biometric methods include several state-of-the-art techniques. Among
them, fingerprint recognition is considered to be the most powerful technique for utmost
security authentication. Advances in sensor technology and an increasing demand for
biometrics are driving a burgeoning biometric industry to develop new technologies.
As commercial incentives increase, many new technologies for person identification
are being developed, each with its own strengths, weaknesses, and potential.
In the research paper [1] by Ravi Subban and Dattatreya P. Mankame, we studied the
introductory material for this project, as well as the various technologies that can
be used to implement it. The techniques are as follows:
i. Correlation-based matching:
Two fingerprint images are superimposed and the correlation between corresponding
pixels is computed for different alignments (e.g., various displacements and rotations).
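To make the idea concrete, here is a minimal sketch of correlation-based matching
(written in Python with NumPy purely for illustration, since the project itself is
implemented in MATLAB; the function name and the small displacement window are our
own assumptions). It scores two patterns by their best normalized cross-correlation
over a few trial shifts; rotations are omitted for brevity:

```python
import numpy as np

def correlation_score(img1, img2, max_shift=2):
    # Best normalized cross-correlation of img2 against img1 over a
    # small window of trial displacements (rotations omitted).
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img2, dy, axis=0), dx, axis=1)
            a = img1 - img1.mean()
            b = shifted - shifted.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 0:
                best = max(best, float((a * b).sum() / denom))
    return best

a = np.zeros((8, 8)); a[2:6, 3] = 1.0      # a short vertical "ridge"
b = np.roll(a, 1, axis=1)                  # the same ridge shifted one pixel
print(correlation_score(a, a) >= 0.99)     # True: identical images
print(correlation_score(a, b) >= 0.99)     # True: shift absorbed by the search
```

Identical prints score 1.0 and a small translation is absorbed by the displacement
search; in practice the size of the search space over shifts and rotations is what
makes this approach computationally expensive.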
We have visited different sites to learn about the MATLAB software and its basics,
the functioning of the software, working with graphics, etc. One of the sites that
helped us learn a major part of MATLAB is MathWorks.
A detailed explanation of what MathWorks is and the things we found there is given
below:
2.3.1 MATHWORKS
As of June 2016, it employed over 3,600 people worldwide with 70% located at the
company's headquarters in Natick, Massachusetts, United States. MathWorks refers to
its corporate social responsibility program as its "Social Mission," which has five
components: Investments in Education, Staff-Driven Initiatives, Local Community
Support, Green Initiatives and Disaster Relief.[10] The company annually sponsors a
number of student engineering competitions, including EcoCAR, an advanced vehicle
technology competition created by the United States Department of Energy (DOE)
and General Motors (GM). MathWorks sponsors museums and science learning centers
such as the Boston Museum of Science (since 1991)[11] and the Cambridge Science
Center in the United Kingdom.[12] It also is a supporter of public broadcasting,
including National Public Radio (NPR)'s Here and Now program.[13] The company
website gathered contributions to the 2010 Haiti earthquake relief efforts.[14]
In December 2012, Uniloc filed a lawsuit against MathWorks over alleged patent
infringements in the activation software that MathWorks uses for MATLAB licenses to
combat piracy.
Using this website, we studied the basics of MATLAB, functioning of the tools in the
software, designing the graphic interface. These are explained below:
i. What is MATLAB?
Millions of engineers and scientists worldwide use MATLAB to analyze and design
the systems and products transforming our world. MATLAB is in automobile active
safety systems, interplanetary spacecraft, health monitoring devices, smart power grids,
and LTE cellular networks. It is used for machine learning, signal processing, image
processing, computer vision, communications, computational finance, control design,
robotics, and much more.
ii. Math. Graphics. Programming.
The MATLAB platform is optimized for solving engineering and scientific problems.
The matrix-based MATLAB language is the world's most natural way to express
computational mathematics. Built-in graphics make it easy to visualize and gain insights
from data. A vast library of prebuilt toolboxes lets you get started right away with
algorithms essential to your domain. The desktop environment invites experimentation,
exploration, and discovery. These MATLAB tools and capabilities are all rigorously
tested and designed to work together.
Chapter-3
PROPOSED SOLUTION
AND METHODOLOGY
PROPOSED SOLUTION
Research on biometric methods has gained renewed attention in recent years brought on
by an increase in security concerns. The recent world attitude towards terrorism has
influenced people and their governments to take action and be more proactive in
security issues. This need for security also extends to the need for individuals to protect,
among other things, their working environments, homes, personal possessions and
assets. Many biometric techniques have been developed and are being improved with
the most successful being applied in everyday law enforcement and security
applications.
A growing number of biometric technologies have been proposed over the past several
years, but only in the past 5 years have the leading ones become more widely deployed.
Some technologies are better suited to specific applications than others, and some
are more acceptable to users. We describe six leading biometric technologies:
i. Facial Recognition
ii. Fingerprint Recognition
iii. Hand Geometry
iv. Iris Recognition
v. Signature Recognition
vi. Speaker Recognition
Since fingerprint images acquired from sensors are not assured of perfect quality,
enhancement methods that increase the contrast between ridges and valleys and that
connect falsely broken ridge points (due to an insufficient amount of ink) are very
useful for keeping fingerprint recognition accuracy high.
Originally, the enhancement step was supposed to be done using the canny edge
detector. But after trial, it turns out that the result of an edge detector is an image with
the borders of the ridges highlighted. Using edge detection would require the use of an
extra step to fill out the shapes which would consume more processing time and would
increase the complexity of the code, as shown in figure [3.1]
3.2 HISTOGRAM EQUALIZATION
Histogram equalization expands the pixel value distribution of an image so as to
increase the perceptual information. The original histogram of a fingerprint image
is shown in [Figure 3.2]; the histogram after histogram equalization is shown in
[Figure 3.3].
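A minimal sketch of histogram equalization (in Python with NumPy purely for
illustration; the project itself uses MATLAB, where `histeq` performs the same
operation, and the function name here is our own): pixel values are remapped through
the image's cumulative histogram so the occupied levels spread over the full 0-255
range.

```python
import numpy as np

def hist_equalize(img):
    # Map 8-bit grey levels through the image's cumulative histogram
    # so the occupied levels spread over the full 0..255 range.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first occupied level
    lut = np.round((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min))
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

# A low-contrast image confined to grey levels 100..120.
img = np.random.default_rng(0).integers(100, 121, (64, 64)).astype(np.uint8)
out = hist_equalize(img)
print(out.min(), out.max())  # 0 255 -- the range is stretched to full scale
```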
In order to enhance a specific block by its dominant frequencies, we multiply the
FFT of the block by its magnitude raised to a power k, where the magnitude of the
original FFT is abs(F(u,v)) = |F(u,v)|:

g(x, y) = F^-1{ F(u, v) * |F(u, v)|^k }    (2)

where the inverse transform F^-1(F(u, v)) is done by:

f(x, y) = (1/MN) * sum_{u=0..M-1} sum_{v=0..N-1} F(u, v) * exp( j*2*pi*(u*x/M + v*y/N) )    (3)

and k is an experimentally chosen constant.
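A minimal sketch of this block-wise FFT enhancement (in Python with NumPy purely
for illustration; the project itself uses MATLAB, and the function name, block size,
and the value k = 0.45 are our own assumptions):

```python
import numpy as np

def fft_enhance(img, k=0.45, block=32):
    # Enhance each block by multiplying its FFT by the FFT magnitude
    # raised to a small power k, so dominant ridge frequencies are
    # amplified relative to noise: g = IFFT(F * |F|^k).
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = img[y:y + block, x:x + block].astype(float)
            F = np.fft.fft2(blk)
            out[y:y + block, x:x + block] = np.real(np.fft.ifft2(F * np.abs(F) ** k))
    return out

# Synthetic ridge pattern with additive noise.
yy, xx = np.mgrid[0:64, 0:64]
ridges = (np.sin(xx / 2.0) > 0).astype(float)
noisy = ridges + 0.3 * np.random.default_rng(1).standard_normal((64, 64))
enh = fft_enhance(noisy)
print(enh.shape)  # (64, 64)
```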
3.2.2 FINGERPRINT IMAGE BINARIZATION
The binarization step is basically stating the obvious, which is that the true
information that could be extracted from a print is simply binary; ridges vs. valleys. But
it is a really important step in the process of ridge extraction, since the prints
are taken as grayscale images, so the ridges, though they are in fact ridges, still
vary in intensity. So, binarization transforms the image from a 256-level image to a
2-level image that gives the same information.
The difficulty in performing binarization is that not all the fingerprint images have the
same contrast characteristics, so a single intensity threshold (global thresholding) cannot
be chosen.
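To illustrate the alternative, here is a sketch of a locally adaptive threshold (in
Python with NumPy purely for illustration; the project uses MATLAB, and the block
size and function name are our own assumptions): each block is thresholded at its
own mean grey value, so no single global threshold is needed.

```python
import numpy as np

def local_binarize(img, block=16):
    # Threshold each block at its own mean grey value, so uneven
    # contrast across the print never needs one global threshold.
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (blk > blk.mean()).astype(np.uint8)
    return out

img = np.tile(np.array([[80, 200], [60, 180]], dtype=np.uint8), (8, 8))
bw = local_binarize(img)
print(sorted(np.unique(bw).tolist()))  # [0, 1]
```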
To extract the ROI, a two-step method is used. The first step is block direction
estimation and direction variety check, while the second is done using some
morphological methods.
i. Calculate the gradient values along the x-direction (gx) and y-direction (gy)
for each pixel of the block. Two Sobel filters are used to fulfill the task.
ii. For each block, use the following formula to get the least-squares approximation
of the block direction:

theta = 0.5 * arctan( sum(2 * gx * gy) / sum(gx^2 - gy^2) )

The formula is easy to understand by regarding the gradient values along the
x-direction and y-direction as cosine and sine values; the tangent of twice the
block direction is then estimated as the ratio above.
After finishing the estimation of each block direction, blocks without significant
information (ridges) are discarded based on a certainty level E computed from the
same gradient sums: for each block, if its certainty level E is below a threshold,
the block is regarded as a background block.
The direction map is shown in the following diagram. We assume there is only one
fingerprint in each image.
Figure [3.7] shows the interest fingerprint image area and its bound. The bound is the
subtraction of the closed area from the opened area. Then the algorithm eliminates those
leftmost, rightmost, uppermost and bottommost blocks out of the bound so as to get the
tightly bounded region just containing the bound and inner area.
3.3.2 MINUTIA MARKING
After the fingerprint ridge thinning, marking minutia points is relatively easy. The
concept of Crossing Number (CN) is widely used for extracting the minutiae.
In general, for each 3x3 window, if the central pixel is 1 and has exactly 3
one-valued neighbors, then the central pixel is a ridge branch [Figure 3.9]. If the
central pixel is 1 and has only 1 one-valued neighbor, then the central pixel is a
ridge ending [Figure 3.10]; i.e., for a pixel P, if CN(P) == 1 it is a ridge ending,
and if CN(P) == 3 it is a ridge bifurcation point.
Fig [3.9] Bifurcation:
0 1 0
0 1 0
1 0 1

Fig [3.10] Termination:
0 0 0
0 1 0
0 0 1

Fig [3.11] Triple-counted branch:
0 1 0
0 1 1
1 0 0
Figure 3.11 illustrates a special case that a genuine branch is triple counted. Suppose
both the uppermost pixel with value 1 and the rightmost pixel with value 1 have another
neighbor outside the 3x3 window, so the two pixels will be marked as branches too, but
actually only one branch is located in the small region. So a check routine requiring
that none of the neighbors of a branch are branches is added.
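A sketch of the neighbour-count rule and the triple-counting problem it creates (in
Python with NumPy purely for illustration; the project uses MATLAB, and the names
are our own). For a simple T-junction, the single genuine branch produces three
branch candidates, which is exactly why the extra check routine is needed:

```python
import numpy as np

def mark_minutiae(skel):
    # The 3x3-window rule on a thinned binary image: a ridge pixel
    # (value 1) with exactly one one-valued neighbour is an ending;
    # with exactly three it is a candidate bifurcation.
    endings, branches = [], []
    h, w = skel.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if skel[y, x] != 1:
                continue
            n = skel[y - 1:y + 2, x - 1:x + 2].sum() - 1  # neighbours only
            if n == 1:
                endings.append((x, y))
            elif n == 3:
                branches.append((x, y))
    return endings, branches

skel = np.zeros((7, 7), dtype=int)
skel[3, 1:6] = 1   # horizontal ridge: two genuine endings
skel[0:3, 3] = 1   # vertical stub meeting it: one genuine branch
ends, brs = mark_minutiae(skel)
print(len(ends), len(brs))  # 2 3 -- one junction yields three branch candidates
```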
Also the average inter-ridge width D is estimated at this stage. The average inter-ridge
width refers to the average distance between two neighboring ridges. The way to
approximate the D value is simple. Scan a row of the thinned ridge image and sum up
all pixels in the row whose values are one. Then divide the row length by the above
summation to get an inter-ridge width. For more accuracy, this kind of row scan is
performed on several other rows, and column scans are also conducted; finally, all
the inter-ridge widths are averaged to get D.
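The row-scan estimate of D can be sketched as (in Python with NumPy purely for
illustration; the project uses MATLAB, and the names are our own):

```python
import numpy as np

def inter_ridge_width(thinned):
    # For each row, divide the row length by the number of one-valued
    # (ridge) pixels in it; average the per-row estimates to get D.
    widths = [len(row) / row.sum() for row in thinned if row.sum() > 0]
    return sum(widths) / len(widths)

thinned = np.zeros((20, 20), dtype=int)
thinned[:, ::5] = 1                 # thinned vertical ridges 5 pixels apart
print(inter_ridge_width(thinned))   # 5.0
```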
Together with the minutia marking, all thinned ridges in the fingerprint image are
labeled with a unique ID for further operation.
3.4 METHODOLOGY
In the given data flow diagram we describe the scanning and the matching process of
the thumb impression through biometrics.
First the thumb impression is captured through the scanner. If it is not enrolled,
the enrollment phase is performed first and the impression is saved in our database.
If it already exists, we get the message that the fingerprint exists and enrollment
is not done again.
For verification, the person scans his or her thumb impression and the system checks
whether it exists in the database. If it exists, verification is done and the
message "fingerprint matched" appears.
Information Technology Department, SRMSCET&R, Bly Page
Security System Using Biometric Sensors
For fingerprint acquisition, optical or semiconductor sensors are widely used. They
have high efficiency and acceptable accuracy except for some cases where the user's
finger is too dirty or dry. However, the testing database for this project consists
of fingerprints scanned using the ink-and-paper technique, because this method
introduces a high level of noise to the image, and the goal of designing a
recognition system is to work under the worst conditions to get the best results.
The minutia extractor and minutia matcher modules are explained in detail later in
this report.
For the fingerprint image preprocessing stage, histogram equalization and the
Fourier transform are used for image enhancement. The fingerprint image is then
binarized using the locally adaptive threshold method. The image segmentation task
is fulfilled by a three-step approach: block direction estimation, segmentation by
direction intensity, and Region of Interest extraction by morphological operations.
For the minutia extraction stage, an iterative parallel thinning algorithm is used.
The minutia marking is a relatively simple task. For the post-processing stage, a
more rigorous algorithm is developed to remove false minutiae.
The minutia matcher chooses any two minutiae as a reference minutia pair and then
matches their associated ridges first. If the ridges match well, the two fingerprint images
are aligned and matching is conducted for all the remaining minutiae.
It includes two sub-domains: one is fingerprint verification and the other is fingerprint
identification.
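A heavily simplified sketch of this alignment-and-count idea (in Python purely for
illustration; the real matcher aligns the associated ridges first, and all names and
the tolerance value here are our own assumptions): the reference pair fixes a
rotation and translation, the second minutiae set is transformed into the first
set's frame, and the matching score is the fraction of minutiae that land close to a
counterpart.

```python
import math

def match_score(set1, set2, ref1, ref2, tol=5.0):
    # Align set2 onto set1 using one reference minutia from each
    # (rotate by the angle difference about ref2, then translate
    # onto ref1), and count aligned points landing near set1 points.
    (x1, y1, a1), (x2, y2, a2) = ref1, ref2
    c, s = math.cos(a1 - a2), math.sin(a1 - a2)
    matched = 0
    for (px, py, pa) in set2:
        rx = x1 + c * (px - x2) - s * (py - y2)
        ry = y1 + s * (px - x2) + c * (py - y2)
        if any(math.hypot(rx - qx, ry - qy) <= tol for (qx, qy, qa) in set1):
            matched += 1
    return matched / max(len(set1), len(set2))

A = [(10, 10, 0.0), (30, 10, 0.0), (20, 40, 0.0)]   # (x, y, ridge angle)
B = [(x + 7, y - 3, 0.0) for (x, y, a) in A]        # same print, translated
print(match_score(A, B, A[0], B[0]))                # 1.0
```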
CHAPTER-4
IMPLEMENTATION AND
OUTCOMES
4.1 IMPLEMENTATION
% Fragment of a ridge frequency estimation routine: im is an image
% block, orientim its ridge orientation data, and rotim (used below)
% is the block rotated so that the ridges run vertically.
debug = 0;
[rows,cols] = size(im);
orientim = 2*orientim(:);
cosorient = mean(cos(orientim));
sinorient = mean(sin(orientim));
orient = atan2(sinorient,cosorient)/2;
% Sum down the columns to get a projection of the grey values down
% the ridges.
proj = sum(rotim);
dilation = ordfilt2(proj, windsze, ones(1,windsze));
maxpts = (dilation == proj) & (proj > mean(proj));
maxind = find(maxpts);
if length(maxind) < 2
freqim = zeros(size(im));
else
NoOfPeaks = length(maxind);
waveLength = (maxind(end)-maxind(1))/(NoOfPeaks-1);
if waveLength > minWaveLength && waveLength < maxWaveLength
freqim = 1/waveLength * ones(size(im));
else
freqim = zeros(size(im));
end
end
if debug
%show(im,1)
%show(rotim,2);
figure(3), plot(proj), hold on
meanproj = mean(proj)
if length(maxind) < 2
fprintf('No peaks found\n');
else
plot(maxind,dilation(maxind),'r*'), hold off
waveLength = (maxind(end)-maxind(1))/(NoOfPeaks-1);
end
end
iv. <Program to normalise the coloured image to desired mean and variance>:
if ~(nargin == 1 | nargin == 3)
error('No of arguments must be 1 or 3');
end
if nargin == 1 % Normalise 0 - 1
if ndims(im) == 3 % Assume colour image
hsv = rgb2hsv(im);
v = hsv(:,:,3);
v = v - min(v(:)); % Just normalise value component
v = v/max(v(:));
hsv(:,:,3) = v;
n = hsv2rgb(hsv);
else% Assume greyscale
if ~isa(im,'double'), im = double(im); end
n = im - min(im(:));
n = n/max(n(:));
end
else % nargin == 3: normalise to the desired mean and variance
if ~isa(im,'double'), im = double(im); end
im = im - mean(im(:));   % zero mean
im = im/std(im(:));      % unit variance
n = reqmean + im*sqrt(reqvar);
end
%---------------------------------
%perform diffusion
%---------------------------------
h = fspecial('gaussian',2*N+1);
cycles = 0;
invalid_cnt = sum(sum(fimg<RLOW | fimg>RHIGH));
while (invalid_cnt > 0 && cycles < diff_cycles)
%---------------
%pad the image
%---------------
fimg = [flipud(fimg(1:N,:));fimg;flipud(fimg(ht-N+1:ht,:))]; %pad the rows
fimg = [fliplr(fimg(:,1:N)),fimg,fliplr(fimg(:,wt-N+1:wt))]; %pad the cols
%---------------
%perform diffusion
%---------------
for i=N+1:ht+N
for j = N+1:wt+N
blk = fimg(i-N:i+N,j-N:j+N);
msk = (blk>=RLOW & blk<=RHIGH);
if(sum(sum(msk))>=valid_nbrs)
blk =blk.*msk;
nfimg(i-N,j-N)=sum(sum(blk.*h))/sum(sum(h.*msk));
else
nfimg(i-N,j-N)=-1; %invalid value
end;
end;
end;
%---------------
%prepare for next iteration
%---------------
fimg = nfimg;
invalid_cnt = sum(sum(fimg<RLOW | fimg>RHIGH));
cycles = cycles+1;
end;
<Program to build database>
ICount = 9;
JCount = 8;
p=0;
for i=1:ICount
for j=1:JCount
filename=['10' num2str(i) '_' num2str(j) '.tif'];
img = imread(filename); p=p+1;
if ndims(img) == 3; img = rgb2gray(img); end% colour image
disp(['extracting features from ' filename ' ...']);
ff{p}=ext_finger(img,1);
end
end
save('db.mat','ff');
[ binim, mask, cimg, cimg2, orient_img, orient_img_m ] = f_enhance(img);
% Making Mask -------------------------------------------------------------
%%if display_flag==1; fprintf('done.\n >>> making mask '); end
mask_t=mask;
for y=19:size(mask,1)-block_size_c*2
for x=block_size_c:size(mask,2)-block_size_c*2
n_mask = 0;
for yy=-1:1
for xx=-1:1
y_t = y + yy *block_size_c;
x_t = x + xx *block_size_c;
if y_t > 0 && x_t > 0 && (y_t ~= y || x_t ~= x) && mask(y_t,x_t) == 0
n_mask = n_mask + 1;
end
end
end
if n_mask == 0
continue
end
if mask(y,x) == 0 || y > size(mask,1) - 20 || y < yt || y > yb || x < xl || x > xr
cimg2(ceil(y/(block_size_c)), ceil(x/(block_size_c))) = 255;
mask_t(y,x) = 0;
continue;
end
for i = y:y+1
for j = x-9:x+9
if i > 0 && j > 0 && i < size(mask,1) && j < size(mask,2) && mask(i,j) > 0
else
cimg2(ceil(y/(block_size_c)), ceil(x/(block_size_c))) = 255;
mask_t(y,x)=0;
break
end
end
end
end
end
mask=mask_t;
inv_binim = (binim == 0);
thinned = bwmorph(inv_binim, 'thin',Inf);
mask_t=mask;
if numel(find(mask(125:150,150:250)>0)) > 0 && ...
numel(find(mask(250:275,150:250)>0)) > 0
mask(150:250,150:250)=1;
end
method=-1; core_y = 0; core_x = 0; core_val=0; lc=0;
o_img=sin(orient_img); o_img(mask == 0) = 1;
lower_t=0.1;
[v,y]=min(cimg);
[dt1,x]=min(v);
delta1_y=y(x)*block_size_c/2; delta1_x=x*block_size_c/2;
v(x)=255; v(x+1)=255;
[dt2,x]=min(v);
delta2_y=y(x)*block_size_c/2; delta2_x=x*block_size_c/2;
v(x)=255; v(x+1)=255;
[dt3,x]=min(v);
delta3_y=y(x)*block_size_c/2; delta3_x=x*block_size_c/2;
db=60;
if dt1 < 1 && delta1_y+db < core_y && delta1_y > 15 || dt2 < 1 && delta2_y+db < ...
core_y && delta2_y > 15 || dt3 < 1 && delta3_y+db < core_y && delta3_y > 15
core_val=255;
end
for y=10:size(o_img,1)-10
for x=10:size(o_img,2)-10
i=i+1;
end
end
if min(s2) < lower_t
s2=sum(s2)/6;
else
s2=s1;
end
s3=[]; i=1;
for a=y:y+2
for b=x+2:x+3
s3(i)=o_img(a,b);
i=i+1;
end
end
if min(s3) < lower_t
s3=sum(s3)/6;
else
s3=s1;
end
s4=[]; i=1;
for a=y:y+2
for b=x-2:x-1
s4(i)=o_img(a,b);
i=i+1;
end
end
if min(s4) < lower_t
s4=sum(s4)/6;
else
s4=s1;
end
s5=[];
i=1;
for a=y-3:y-1
for b=x-2:x-1
s5(i)=o_img(a,b);
i=i+1;
end
end
if min(s5) < lower_t
s5=sum(s5)/6;
else
s5=s1;
end
s6=[]; i=1;
for a=y-3:y-1
for b=x+2:x+3
s6(i)=o_img(a,b);
i=i+1;
end
end
if min(s6) < lower_t
s6=sum(s6)/6;
else
s6=s1;
end
if s1-s2 > core_val
core_val=s1-s2;
core_x=x;
core_y=y;
lc=o_img(y,x);
method=1;
end
if s1-s3 > core_val
core_val=s1-s3;
core_x=x;
core_y=y;
lc=o_img(y,x);
method=2;
end
if x < 300 && s1-s4 > core_val
core_val=s1-s4;
core_x=x;
core_y=y;
lc=o_img(y,x);
method=3;
end
if x < 300 && s1-s5 > core_val
core_val=s1-s5;
core_x=x;
core_y=y;
lc=o_img(y,x);
method=4;
end
if s1-s6 > core_val
core_val=s1-s6;
core_x=x;
core_y=y;
lc=o_img(y,x);
method=5;
end
end
end
if core_y > 37
yt=20;
else
yt=5;
end
test_smooth = 100;
if core_y > 0
test_smooth= sum(sum(o_img(core_y-yt-5:core_y-yt+5,core_x-5:core_x+5)));
end
if lc > 0.41 && (test_smooth < 109.5 && method~=2 || test_smooth < 100)
% disabled extra condition: && min(min(o_img(1:core_y-yt,core_x-10:core_x+10))) < 0.17
start_t=0;
core_val=1/(core_val+1);
else
core_x=0;
core_y=0;
core_val = 255;
end
mask=mask_t; path_len = 45;
skip=1;
end
end
end
if skip == 1
continue;
end
t_a=[];
c = 0;
for e=y-1:y+1
for f=x-1:x+1
c = c + 1;
t_a(c) = orient_img_m(e,f);
end
end
m_o = median(t_a); m_f = 0;
if CN == 3
[CN, prog, sx, sy,ang]=test_bifurcation(thinned, x,y, m_o, core_x, core_y);
if prog < 3
continue
end
if ang < pi
m_o = mod(m_o+pi,2*pi);
end
else
progress=0;
xx=x; yy=y; pao=-1; pos=0;
while progress < 15 && xx > 1 && yy > 1 && yy<size(img,1) && xx<size(img,2) && ...
pos > -1
pos=-1;
for g = 1:8
[ta, xa, ya] = p(thinned, xx, yy, g);
[tb, xb, yb] = p(thinned, xx, yy, g+1);
if (ta > tb) && pos==-1 && g ~= pao
pos=ta;
if g < 5
pao = 4 + g;
else
pao = mod(4 + g, 9) + 1;
end
xx=xa; yy=ya;
end
end
progress=progress+1;
end
if progress < 10
continue
end
if mod(atan2(y-yy,xx-x), 2*pi) > pi
m_o=m_o+pi;
end
end
minutiae(minu_count, :) = [ x, y, CN, m_o, m_f, 1];
min_path_index(minu_count, :) = [sx sy];
minu_count = minu_count + 1;
end
end% if pixel white
end% for y
end% for x
rc=0;
for y=max(Y-2,1):min(Y+2, size(binim,1))
if rc > 0
break
end
for x=max(X-2,1):min(X+2, size(binim,2))
if mask(y,x) == 0
rc = rc + 1;
break
end
end
end
if rc > 0
continue;
else
t_minutiae(t_minu_count, :) = minutiae(i, :);
t_mpi(t_minu_count, :) = min_path_index(i, :);
t_minu_count = t_minu_count + 1;
end
end
minutiae = t_minutiae;
min_path_index = t_mpi;
minu_count = size(minutiae,1);
t_minu_count = 1; t_minutiae = [];
dist_m = dist2(minutiae(:,1:2), minutiae(:,1:2));
dist_test=49;
for i=1:minu_count
reject_flag = 0;
P_x = minutiae(i,1); P_y = minutiae(i,2);
for j = i + 1 : minu_count
if dist_m(i,j) <= dist_test
reject_flag = 1;
end
end
end
end
t_minutiae(t_minu_count, :) = minutiae(i, :);
t_minu_count = t_minu_count + 1;
end
end
minutiae = t_minutiae;
minu_count = t_minu_count-1;
tmpvec1 = size(img,1).*ones(minu_count,1);
tmpvec2 = ones(minu_count,1);
minutiae_for_sc = [minutiae(:,1)/size(img,2) (tmpvec1 - minutiae(:,2) + ...
tmpvec2)/size(img,1)];
dist_m = sqrt(dist2(minutiae_for_sc(:,1:2), minutiae_for_sc(:,1:2)));
for i=1:minu_count
[d,ind] = sort(dist_m(i,:));
for j = 1 : minu_count
if dist_m(i,ind(j)) == 0
continue
end
theta_t = mod(atan2(minutiae(i,2) - minutiae(ind(j),2), minutiae(i,1) - minutiae(ind(j),1)), 2*pi);
ridge_count = 0;
p_y = minutiae(i,2); p_x = minutiae(i,1);
t_x = 0; t_y = 0;
current=1; radius = 1;
while p_y ~= minutiae(ind(j),2)
if thinned(p_y, p_x) > 0 && current == 0 && (t_x ~= p_x || t_y ~= p_y)
current = 1;
ridge_count = ridge_count + 1;
else
if thinned(p_y, p_x) == 0
current = 0;
end
end
end
end
elseif minutiae(i, 3) > 5
for k = y1-2: y1 + 2
for l = x1-2: x1 + 2
minutiae_img(k, l,:) = [128, 128, 0]; % gold for delta
end
end
end
end
combined = uint8(minutiae_img);
for x=1:size(binim,2)
for y=1:size(binim,1)
if mask(y,x) == 0
combined(y,x,:) = [0,0,0];
continue
end
if (thinned(y,x)) % binim(y,x))
combined(y,x,:) = [255,255,255];
else
combined(y,x,:) = [0,0,0];
end% end if
if (minutiae_img(y,x,1) ~= 0) || (minutiae_img(y,x,2) ~= 0) || (minutiae_img(y,x,3) ~= 0)
combined(y,x,:) = minutiae_img(y,x,:);
end
end% end for y
end% end for x
if core_val < 1 && YA > 0
for k = YA-2: YA + 2
for l = XA-2: XA + 2
combined(k,l,:) = [20, 255, 250];
end
end
for k = YB-2: YB + 2
for l = XB-2: XB + 2
combined(k,l,:) = [20, 255, 250];
end
end
end
end
ret=minutiae;
end
<Main Program>
clear all; clc; addpath(genpath(pwd));
ICount = 9;
JCount = 8;
fprintf('Matching.');
load('db.mat');
%% EXTRACT FEATURES FROM AN ARBITRARY FINGERPRINT
[file path]=uigetfile('*.*','select image');
Information Technology Department, SRMSCET&R, Bly Page
Security System Using Biometric Sensors
filename=strcat(path,file);
img = imread(filename);
if ndims(img) == 3; img = rgb2gray(img); end% Color Images
ffnew=ext_finger(img,1);
%% FOR EACH FINGERPRINT TEMPLATE, CALCULATE MATCHING SCORE IN COMPARISON WITH FIRST ONE
S=zeros(ICount*JCount,1);
fprintf('..\n');
for i=1:ICount*JCount
second=['10' num2str(fix((i-1)/8)+1) '_' num2str(mod(i-1,8)+1)];
S(i)=match(ffnew,ff{i});
drawnow
end
%% OFFER MATCHED FINGERPRINTS
Matched_FingerPrints=find(S>0.48);
if size(Matched_FingerPrints,1)>0
disp('Fingerprint matched with database!');
else
disp('Fingerprint did not match with the database!');
end
T2=transform(M2,j);
for a=-5:5 %Alpha
T3=transform2(T2,a*pi/180);
sm=score(T1,T3);
if S<sm
S=sm;
bi=i; bj=j; ba=a;
end
end
end
end
end
if display_flag==1
figure, title(['Similarity Measure : ' num2str(S)]);
T1=transform(M1,bi);
T2=transform(M2,bj);
T3=transform2(T2,ba*pi/180);
plot_data(T1,1);
plot_data(T3,2);
end
end
if d<T
DTheta=abs(T1(i,3)-T2(j,3))*180/pi;
DTheta=min(DTheta,360-DTheta);
if DTheta<TT
n=n+1; %Increase Score
Found=1;
end
end
j=j+1;
end
end
sm=sqrt(n^2/(Count1*Count2)); %Similarity Index
end
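The similarity index computed above normalises the number n of matched minutiae pairs by the geometric mean of the two minutiae counts, since sqrt(n^2/(Count1*Count2)) = n/sqrt(Count1*Count2). A small worked example (the values are illustrative, not taken from the report's experiments):

```matlab
% Illustrative values only.
n = 20;        % matched minutiae pairs found within the distance and
               % angle tolerances T and TT
Count1 = 30;   % minutiae in the first template
Count2 = 40;   % minutiae in the second template
sm = sqrt(n^2/(Count1*Count2));   % = 20/sqrt(1200), approximately 0.577
```

The index is 1 only when every minutia in both templates is matched, and it falls off symmetrically as either template carries unmatched minutiae.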
for i=1:Count
B=T(i,:)-[0 0 alpha 0];
Tnew(i,:)=R*B';
end
end
4.2 OUTCOMES
Here are some images of the running project.
Scanning of the fingerprint is done through the biometric device. In this step the fingerprint, or thumb impression, is first scanned by the scanner for the verification or enrollment process. The fingerprint should be placed properly to avoid any disturbance that might degrade the quality of the impression obtained.
As soon as we run the project, we have three options once our fingerprint impression has been scanned.
The first is verification, in which we select a fingerprint for enrollment or for verification. If the fingerprint is not in the database, then we build a database entry for it.
After we select the SELECT FINGERPRINT option, a window appears showing the fingerprints already saved in the database. From these we select the desired impression.
The selected fingerprint impression appears in the box. Now we can either save an unsaved impression into the database or check the authorization of an existing impression.
When we select the check-authorization command, the scanner starts verifying the fingerprint impression.
When verification is done, if the fingerprint impression matches one in the database, the message "matched with database" is displayed; if it does not verify, the message "not matched with database" appears.
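The check-authorization step described above reduces to comparing each stored template's matching score against a fixed threshold. A minimal sketch using the report's own ext_finger and match functions (the 0.48 threshold is the one used in the main program; the helper name check_authorization is illustrative, not part of the report's code):

```matlab
% Illustrative helper (name is hypothetical); it wraps the decision logic
% of the main program: extract features from the scan, score it against
% every stored template, and accept if any score exceeds the threshold.
function verified = check_authorization(img, ff, threshold)
if nargin < 3
    threshold = 0.48;           % threshold used in the main program
end
probe = ext_finger(img, 1);     % minutiae features of the scanned print
scores = zeros(numel(ff), 1);
for k = 1:numel(ff)
    scores(k) = match(probe, ff{k});  % similarity against template k
end
verified = any(scores > threshold);
end
```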
CHAPTER-5
CONCLUSION AND FUTURE SCOPE
As the graph shown below illustrates, eliminating a step from the whole process or changing some of the parameters affects the matching process.
Observations:
i. When altering such an important step as image enhancement, the performance of the system drops rapidly as the noise in the image increases. When working with a biometric identification system, obtaining clear and noise-free images is very hard, so this step is usually needed.
ii. For the binarization step, as explained earlier, using global thresholding may introduce problems and may lead to the elimination of significant details by mistake. Here, I tried global thresholding with two different thresholds, once using an intensity value of 120 and a second time using a value of 80. As we can see from the graph, setting the threshold at 120 (although it is almost the average value for a gray-scale image) hurt the system performance considerably and led to false non-match results, while a fixed threshold as low as 80 gave better results. Still, it remains better to use the adaptive threshold method: although it consumes more processing time, it guarantees the quality of the results.
iii. If we remove the H-breaks step, the system is not greatly affected and the matching process does not become much harder; but it is a preprocessing step that adds little complexity to the system, so there is no harm in keeping it for higher accuracy.
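The global-versus-adaptive comparison in observation (ii) can be sketched as follows; the file name, block size and block rule here are illustrative, not the exact values used in the experiments:

```matlab
% Global thresholding: one fixed cut-off for the whole image.
img = im2double(imread('fingerprint.bmp'));  % illustrative file name
bw_global = img < 120/255;                   % ridges assumed darker than 120

% Adaptive (block-wise) thresholding: each block is cut at its own mean,
% so local contrast variation does not wipe out ridge detail.
blksze = 16;
fun = @(b) b.data < mean(b.data(:));
bw_adaptive = blockproc(img, [blksze blksze], fun);
```

With the global rule, a single bright or dark region can erase ridges over a whole area; the block-wise rule adapts the cut-off locally at the cost of extra processing time.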
The reliability of any automatic fingerprint system strongly depends on the precision obtained in the minutiae extraction process. A number of factors degrade the correct location of minutiae; among them, poor image quality has the most influence.
The proposed alignment-based elastic matching algorithm is capable of finding the correspondences between minutiae without resorting to an exhaustive search.
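The alignment step behind this matching can be sketched as follows: a minutiae set is translated so a chosen reference minutia sits at the origin and rotated by that minutia's angle, so that two prints can be compared in a common frame. This is a minimal illustration consistent with the transform/transform2 fragments above, not the report's exact code; the function and variable names are illustrative:

```matlab
% Minimal sketch (illustrative names): align an n-by-3 minutiae set
% M = [x y theta] to the reference minutia ref = [x y theta].
function Tnew = align_minutiae(M, ref)
alpha = ref(3);
R = [cos(alpha) sin(alpha); -sin(alpha) cos(alpha)]; % rotation by -alpha
n = size(M,1);
Tnew = zeros(n,3);
for i = 1:n
    xy = R*(M(i,1:2) - ref(1:2))';             % translate, then rotate
    Tnew(i,:) = [xy' mod(M(i,3)-alpha, 2*pi)]; % rotate the minutia angle too
end
end
```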
There is scope for further improvement in efficiency and accuracy, which can be achieved by improving the hardware that captures the image or by improving the image enhancement techniques, so that the input image to the thinning stage is better; this would improve the later stages and the final outcome.
Fingerprints are among the cheapest, fastest, most convenient and most reliable ways to identify someone. Fingerprint authentication has many usability advantages over traditional systems such as passwords. Today, fingerprint recognition technology is used for security purposes, to restrict access or to protect computers.
5.3 FUTURE SCOPE
Biometrics is quite rightly viewed as being at the cutting edge of security technology. From the very first commercial application of a fingerprint reader in 1984, we have seen new
systems and applications introduced to the market on a regular basis; some are still
firmly in the development phase whilst others, like iris and facial recognition, are
gradually being introduced into practical installations.
In many ways, it has taken the increased threat from global terrorism and organised
crime to create an acceptance of biometric security, convincing an anxious and cynical
public that systems do not necessarily pose a threat to civil liberties, provided they are
properly controlled and effectively managed.
Recent system developments have seen a significant change in both the biometric
information being analysed and the quality of the reading and processing performance.
From the early fingerprint readers - which still carry with them an unfortunate association with criminal identification, as well as some lingering doubts over the user's ability to fool the scanners - have come a range of iris, face, vein and voice technologies.
These emergent technologies are now providing specifiers and security managers with
real choice, allowing them to select the most appropriate system for their particular
needs - balancing the key variables of accuracy, quality, reliability, speed of performance
and cost.
APPENDIX
<RIDGE FILTER>
function newim = ridgefilter(im, orient, freq, kx, ky, showfilter)
if nargin == 5
showfilter = 0;
end
im = double(im);
[rows, cols] = size(im);
newim = zeros(rows,cols);
[validr,validc] = find(freq > 0); % find where there is valid frequency data.
ind = sub2ind([rows,cols], validr, validc);
% Round frequencies to the nearest 0.01 so that a filter need only be
% generated for each distinct frequency value present in the image.
freq(ind) = round(freq(ind)*100)/100;
unfreq = unique(freq(ind));
for k = 1:length(unfreq)
sigmax = 1/unfreq(k)*kx;
sigmay = 1/unfreq(k)*ky;
sze(k) = round(3*max(sigmax,sigmay));
[x,y] = meshgrid(-sze(k):sze(k));
reffilter = exp(-(x.^2/sigmax^2 + y.^2/sigmay^2)/2)...
.*cos(2*pi*unfreq(k)*x);
% Find indices of matrix points greater than maxsze from the image
% boundary
maxsze = sze(1);
finalind = find(validr>maxsze & validr<rows-maxsze &...
validc>maxsze & validc<cols-maxsze);
s = sze(filterindex);
newim(r,c) = sum(sum(im(r-s:r+s, c-s:c+s).*filter{filterindex,orientindex(r,c)}));
end
<RIDGE FREQUENCY>
function [freq, medianfreq] = ridgefreq(im, mask, orient, blksze, windsze, ...
minWaveLength, maxWaveLength)
[rows, cols] = size(im);
freq = zeros(size(im)); % initialise the frequency image
for r = 1:blksze:rows-blksze
for c = 1:blksze:cols-blksze
blkim = im(r:r+blksze-1, c:c+blksze-1);
blkor = orient(r:r+blksze-1, c:c+blksze-1);
freq(r:r+blksze-1,c:c+blksze-1) = ...
freqest(blkim, blkor, windsze, minWaveLength, maxWaveLength);
end
end
% Find median frequency over all the valid regions of the image.
medianfreq = median(freq(find(freq>0)));
<RIDGE ORIENTATION>
function [orientim, reliability] = ...
ridgeorient(im, gradientsigma, blocksigma, orientsmoothsigma)
[rows,cols] = size(im);
reliability = 1 - Imin./(Imax+.001);
<RIDGE SEGMENT>
function [normim, mask, maskind] = ridgesegment(im, blksze, thresh)
fun = inline('std(x(:))*ones(size(x))');
% Renormalise image so that the *ridge regions* have zero mean, unit
% standard deviation.
im = im - mean(im(maskind));
normim = im/std(im(maskind));