Pattern Recognition
1 Department of Computer Science Engineering, Research Scholar, RCET, Bhilai (C.G.), India
2 Department of Computer Science Engineering, Dean (R&D), RCET, Bhilai (C.G.), India
ABSTRACT
This paper proposes an offline signature recognition system based on the centre of gravity, the Hough transform, and a neural network. Like other biometric measures, signatures exhibit inherent variability and therefore pose a difficult recognition problem. The signature is preprocessed through binarization, edge cutting, and thinning, which provides a more accurate platform for the feature extraction methods. The centre of gravity is computed at two levels: instead of taking one common centre of gravity for the entire signature, the centre of gravity of each character is computed separately, and the final value used for recognition is the mean of the centre of gravity values of the characters present in the signature. Morphological operations are applied to these signature images together with the Hough transform to determine regular shapes that assist the authentication process. The values extracted from the Hough space are fed to a feed-forward neural network trained with the back-propagation algorithm. After the different training stages, the efficiency is found to be above 95%.
KEYWORDS: Static, dynamic, edge detection, COG, back propagation, artificial neural network.
I. INTRODUCTION
Biometrics is the science and technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics for authentication purposes [1]. Humans have recognized each other by their various characteristics for ages. Biometrics offers automated methods of identity verification or identification based on measurable physiological or behavioural characteristics such as a fingerprint or voice. Automatic signature verification is an active research field with many applications. There are two major categories: static and dynamic signature verification.
II. DYNAMIC SIGNATURE
In this mode, users write their signature on a digitizing tablet, which acquires the signature in real time [6]. Dynamic recognition is also known as on-line recognition, meaning that the machine recognizes the handwriting as the user writes. It requires a transducer that captures the signature as it is written. The on-line system also produces timing information such as acceleration (speed of writing), retouching, pressure, and pen movement [3].
III. STATIC SIGNATURE
In this mode, users write their signature on paper, it is digitized through an optical scanner or a camera, and the signature is stored as an image; the biometric system then recognizes the signature from this image. Static recognition is also known as off-line recognition.
IV. SIGNATURE DATABASE
Signature samples were acquired from 50 individuals. All of these signatures were scanned and stored in a database so that they could be used in the preprocessing steps, the feature extraction methods, and the matching process.
V. PROPOSED ALGORITHM
Step 1: Scan the signatures from paper to create a signature image database.
Step 2: Resize the scanned image, convert it into a grayscale image, and then thin it so that its important area is highlighted.
Step 3: Perform morphological operations, which gradually enlarge the boundaries of the regions of foreground pixels.
Step 4: Determine the edges of the shapes in the image, where edges appear in white on a black background. The image area is filtered to remove small dots and isolated pixels, as these affect the local features of the signature. Maximum vertical and horizontal projections of the skeletonized image are calculated.
Step 5: Extract the centre of gravity from each signature at two levels. Level 1 gives the centre of gravity of each connected character present in the signature, and Level 2 calculates the final centre of gravity as the mean of the Level 1 values, since similar images have centres of gravity that are approximately the same for their similar segments or parts.
Step 6: Apply morphological operations to these signature images together with the Hough transform to determine regular shapes that assist the authentication process (see the sketch after this list).
Step 7: Feed the values extracted from the Hough space into a feed-forward neural network trained with the back-propagation algorithm.
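As a rough Python sketch of Step 6 (assuming scikit-image and NumPy are available; the helper name hough_features and the choice of peak-based features are illustrative, not the exact feature set of the paper), values can be read out of the straight-line Hough accumulator of a binarized signature as follows:

import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def hough_features(binary_signature, num_peaks=8):
    """Return a fixed-length feature vector from the straight-line Hough space."""
    # hough_line expects a 2-D binary image; True/1 marks signature pixels.
    hspace, angles, dists = hough_line(binary_signature)
    accums, peak_angles, peak_dists = hough_line_peaks(
        hspace, angles, dists, num_peaks=num_peaks)
    # Pad to a fixed length so every signature yields the same feature dimension.
    feats = np.zeros(3 * num_peaks)
    n = len(accums)
    feats[:n] = accums
    feats[num_peaks:num_peaks + n] = peak_angles
    feats[2 * num_peaks:2 * num_peaks + n] = peak_dists
    return feats

Producing a fixed-length vector in this way makes the Hough-space values directly usable as neural network inputs in Step 7.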
VI. PREPROCESSING
Preprocessing standardizes the images and is an important step before feature extraction: all signatures are binarized and thinned, and their size is standardized. Steps 1, 2, and 3 of the designed algorithm cover this preprocessing.
6.1 Binarization
Binarization reduces the amount of image information (removing colour and background) so that the output image is black and white. A black-and-white image is much easier to process further [7]. The threshold Q is computed as
Q = S / (X · Y)    (1)
where S is the sum of the values of all image pixels and X and Y are the horizontal and vertical sizes of the signature image, respectively. The value of each image pixel is compared with Q: if the value is greater than Q, the pixel is set to white; otherwise it is set to black.
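A minimal NumPy sketch of this thresholding rule (the function name binarize is illustrative):

import numpy as np

def binarize(gray):
    """Binarize a grayscale signature with the global mean threshold of Eq. (1)."""
    # Q = S / (X * Y): sum of all pixel values divided by the image size.
    Q = gray.sum() / gray.size
    # Pixels brighter than Q become white (255); the rest become black (0).
    return np.where(gray > Q, 255, 0).astype(np.uint8)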
6.3 Thinning
Thinning forms a region-based shape of the signature; a thresholding process is used here, and the main features of the object are preserved [7]. Thinning eliminates the effect of the different line thicknesses that result from the use of different writing pens; the result is a skeletonized signature image whose strokes are one pixel wide. The Pavlidis algorithm is used to obtain the thinned image [8].
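The paper relies on the Pavlidis algorithm [8]; as a quick stand-in for experimentation, the skeletonization routine in scikit-image also produces a one-pixel-wide skeleton, as sketched below (this is not the Pavlidis algorithm):

from skimage.morphology import skeletonize

def thin(bw):
    """Thin a binarized signature (black signature pixels on a white background)
    to a skeleton one pixel wide; returns a boolean mask of skeleton pixels."""
    return skeletonize(bw == 0)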
VII. FEATURE EXTRACTION
Feature selection and extraction play a major role in all pattern recognition systems. It is preferable to extract those features that enable the system to correctly discriminate one class from the other [6]. As genuine samples and forgeries are very similar in most cases, it is very important to extract an appropriate feature set for discriminating between genuine and forged signatures. Any forgery must contain some deviations from the original model, and a genuine sample is rarely identical to the model, although there must be similarities between them [8]. Such similarities make it possible to determine the features needed for the verification process. These features are effective in verification, showing a relatively high rate of correctness, since they give more importance to pixel positions and are less sensitive to noise [8]. Feature extraction gathers characteristic data and outputs a set of information unique to the signature.
Feature extraction process: initially, only the height and length of the signature were extracted as features for matching signatures in the database. Twenty signatures of different persons were used for testing, but this produced errors because the length and height of a signature may vary with the speed and angle of signing. The centre of gravity was therefore selected as an important feature so that the verification procedure can be carried out reliably [9].
The Laplacian of the image intensity I is used for edge detection:

L(x, y) = ∂²I/∂x² + ∂²I/∂y²    (2)

This can be calculated using a convolution filter. Since the input image is represented as a set of discrete pixels, we have to find a discrete convolution kernel that can approximate the second derivatives in the definition of the Laplacian, shown in Figure 2.
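A minimal sketch of that discrete approximation, assuming SciPy and the common 4-neighbour Laplacian kernel (one of several possible kernels):

import numpy as np
from scipy.ndimage import convolve

# A common discrete approximation of the Laplacian in Eq. (2).
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=float)

def laplacian(image):
    """Convolve a 2-D grayscale image with the discrete Laplacian kernel."""
    return convolve(image.astype(float), LAPLACIAN_KERNEL, mode='constant')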
The centre of gravity coordinates of the binarized signature are obtained by summing over the pixels that belong to the signature:

x_c = Σ_{x=0}^{255} Σ_{y=0}^{255} x · [I(x, y) < Q] / Σ_{x=0}^{255} Σ_{y=0}^{255} [I(x, y) < Q]    (3)

y_c = Σ_{x=0}^{255} Σ_{y=0}^{255} y · [I(x, y) < Q] / Σ_{x=0}^{255} Σ_{y=0}^{255} [I(x, y) < Q]    (4)

where [I(x, y) < Q] is 1 when the pixel value is below the threshold Q (a signature pixel) and 0 otherwise.
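A sketch of the two-level computation of Step 5, assuming SciPy (Level 1 takes the centre of gravity of each connected component, Level 2 averages them; the function name two_level_cog is illustrative):

import numpy as np
from scipy.ndimage import label, center_of_mass

def two_level_cog(signature_mask):
    """signature_mask: boolean 2-D array, True on signature (foreground) pixels."""
    # Level 1: one (row, column) centre of gravity per connected component.
    labels, num = label(signature_mask)
    if num == 0:
        return np.array([0.0, 0.0])
    level1 = center_of_mass(signature_mask, labels, index=range(1, num + 1))
    # Level 2: the mean of the per-component centres is the final feature.
    return np.mean(np.asarray(level1), axis=0)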
VIII. NEURAL NETWORK
Multi-layer perceptron (MLP) neural networks are among the most commonly used classifiers for pattern recognition problems [6]. Despite their advantages, they suffer from some serious limitations that make their use impossible for certain problems. The first limitation is the size of the neural network: very large networks are difficult to train, and as the amount of training data increases this difficulty becomes a serious obstacle to the training process. The second difficulty is that the geometry and size of the network, the training method used, and the training parameters depend substantially on the amount of training data. Also, in order to specify the structure and size of the neural network it is necessary to know a priori the number of classes that the network will have to deal with. Unfortunately, for a practical signature recognition and verification system (SRVS), a priori knowledge about the number of signatures and the number of signature owners is not available. In this work a back-propagation neural network is used to make the final decision. Back-propagation neural networks are feed-forward architectures with a non-linear hidden layer and a linear output layer. The training of the system includes the following two steps: the network is trained with signature images chosen at random from the available database, and the extracted features are passed into the neural network while the input weights are adjusted at each iteration.
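A bare-bones NumPy sketch of such a network, with a logsig hidden layer, a linear output layer, and plain gradient-descent back-propagation (the original work used MATLAB's traingdm; the layer sizes and learning rate here are illustrative):

import numpy as np

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

class BackpropNet:
    def __init__(self, n_in, n_hidden, n_out, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = logsig(X @ self.W1 + self.b1)   # non-linear (logsig) hidden layer
        return self.h @ self.W2 + self.b2        # linear output layer

    def train_step(self, X, target):
        out = self.forward(X)
        err = out - target                       # gradient of the squared error (up to a constant)
        # Back-propagate through the linear output layer.
        dW2 = self.h.T @ err
        db2 = err.sum(axis=0)
        # Back-propagate through the logsig hidden layer.
        dh = (err @ self.W2.T) * self.h * (1.0 - self.h)
        dW1 = X.T @ dh
        db1 = dh.sum(axis=0)
        # Gradient-descent update of the weights.
        self.W1 -= self.lr * dW1; self.b1 -= self.lr * db1
        self.W2 -= self.lr * dW2; self.b2 -= self.lr * db2
        return float(np.mean(err ** 2))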
IX. REJECTION AND ACCEPTANCE
9.1 Rejection
The legitimate user is rejected because the system does not find the users current biometric data
similar enough to the master template stored in the database. Correct rejection: The system was asked
if the signature belonged to a false owner and the response was negative. False rejection: The system
was asked if the signature belonged to the correct owner and the response was negative.
9.2 Acceptance
An impostor is accepted as a legitimate user because the system finds the impostor's biometric data similar enough to the master template of a legitimate user. Correct acceptance: the system was asked if the signature belonged to the correct owner and the response was positive. False acceptance: the system was asked if the signature belonged to a false owner and the response was positive. The division of the signatures into groups and the adoption of a two-stage structure were also examined; such a structure leads to small, easily trained classifiers without sacrificing performance by leaving out features that may be useful to the system.
X. RESULTS
For the verification process, different features are extracted using various methods, and matching of the signatures is performed on the basis of these features. This paper focuses on the centre of gravity as an important feature that provides more accurate values for the matching process; the differences between the centre of gravity values obtained for the first 15 signature sample images are shown in Table 1.
In order to observe the variations among the signatures, the centre of gravity value is taken into consideration and the difference of each sample signature from every other sample is calculated. From Table 1 it is clear that every signature has a certain variation from the others with respect to the value obtained. In the investigations, the characteristic features (set of sections, projection, area, cross points, centre of gravity) have been tested separately, and the influence of each feature has been observed. The tests provide information about the coefficients FAR (False Accept Rate) and FRR (False Reject Rate). The FAR is typically stated as the ratio of the number of false acceptances to the total number of identification attempts. The FRR is stated as the ratio of the number of false rejections to the total number of identification attempts. The experimental results show that the back-propagation network performs well in identifying forgery. The proposed system may provide an efficient solution to an unresolved and very difficult problem in the field of signature verification. A false rejection ratio of less than 0.1 and a false acceptance ratio of 0.17 were achieved in this system. The neural network parameters used in the different cases are given in Table 2. Recognition and verification results are given in Table 3 and Table 4, respectively.
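Following the definitions above, both rates reduce to simple ratios over the identification attempts; a trivial sketch:

def far_frr(false_accepts, false_rejects, total_attempts):
    """FAR and FRR as defined above: false acceptances (resp. rejections)
    divided by the total number of identification attempts."""
    return false_accepts / total_attempts, false_rejects / total_attempts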
Table 1. Differences between the centre of gravity values of the signature samples S_1 to S_15.
Table 2. Neural network parameters used in the three training cases.

                     Set 1       Set 2       Set 3
                     0.9         0.85        0.78
                     0.001       0.03        0.03
Epochs               4800        4800        3500
Nonlinear function   logsig      logsig      logsig
Training function    traingdm    traingdm    traingdm
Table 3. Recognition results.

            Reference output   Sample 1   Sample 2   Sample 3   FRR
Person 1    0.9                0.899      0.854      0.865      0.00
Person 2    0.9                0.883      0.867      0.875      0.00
Person 3    0.9                0.880      0.758      0.832      0.05
Person 4    0.9                0.890      0.895      0.645      0.045
Table 4. Verification results.

            Reference output   Sample 1   Sample 2   Sample 3   FRR
Person 1    0.4                0.283      0.022      0.017      0.00
Person 2    0.4                0.210      0.204      0.108      0.00
Person 3    0.4                0.019      0.384      0.745      0.18
Person 4    0.4                0.008      0.547      0.578      0.16

XI. CONCLUSION
The centre of gravity is an important parameter or feature used in the matching process for signatures. Existing techniques compute a single-level value for the centre of gravity, whereas in this paper the centre of gravity is refined and calculated at two levels, providing a more accurate value with which to distinguish signatures easily. Since the matching process depends entirely on accurate feature values, further refined feature values can be combined with the two-level centre of gravity computation so that the verification process becomes more accurate and authenticity can be increased. By using the Hough transform, regular shapes are determined, which overcomes the problem of pertinent shape. A major contribution of this work is the determination of a matrix of points from the Hough space that is used as the input to the neural network, which is then trained with the back-propagation algorithm. The experiments were performed with a database of 150 signatures, and the efficiency in the three different cases is found to be approximately above 95%.
REFERENCES
[1] Z. Riha and V. Matyas, "Biometric Authentication Systems," FIMU-RS-2000-08, pp. 4-44, 2000.
[2] G. Agam and S. Suresh, "Warping-based Offline Signature Recognition," IEEE Transactions on Information Forensics and Security, vol. 2, no. 3, pp. 430-437, 2007.
[3] F. Z. Marcos, "Signature Recognition State-of-the-Art," IEEE A&E Systems Magazine, July 2005, pp. 28-32.
[4] J. Wen, B. Fang, and T. Jhang, "Offline Signature Verification: A New Rotation Invariant Approach," pp. 3583-3586.
[5] B. Jayasekara, A. Jayasiri, and L. Udawatta, "An Evolving Signature Recognition System," First International Conference on Industrial and Information Systems (ICIIS 2006), 8-11 August 2006, Sri Lanka, pp. 529-534.
[6] J. Zheng and G. Zhu, "On-Line Handwriting Signature Recognition Based on Wavelet Energy Feature Matching," Proceedings of the 6th World Congress on Intelligent Control and Automation, June 21-23, 2006, Dalian, China, pp. 9885-9888.
[7] D. Verma, P. Mishra, and T. Badal, "A Preprocessing and Feature Extraction Method for Static Signature Recognition Using Hough Transform and Wavelet Analysis," ICCET'10, International Conference on Computer Engineering and Technology.
[8] T. Pavlidis, "A Thinning Algorithm for Discrete Binary Images," Computer Graphics and Image Processing, 1980, 13: 142-157.
[9] F. Nouboud and R. Plamondon, "Global Parameters and Curves for Off-Line Signature Verification," Workshop on Frontiers in Handwriting Recognition, Taiwan, 1994, pp. 145-155.
[10] P. Porwik, "The Compact Three Stage Method of the Signature Recognition," Proceedings of the 6th International Conference on Computer Information Systems and Industrial Management Applications, 2007.
[11] E. J. R. Justino, F. Bortolozzi, and R. Sabourin, "Offline Signature Verification Using HMM for Random, Simple and Skilled Forgeries," Proceedings of the 6th International Conference on Document Analysis and Recognition, 2001.
[12] M. V. Karki, K. Indira, and S. Sethu Selvi, "Offline Signature Recognition and Verification Using Neural Network," Proceedings of the International Conference on Computational Intelligence and Multimedia Applications, 2007.
[13] S. M. Odeh and M. Khalil, "Off-line Signature Verification and Recognition: Neural Network Approach," International Conference on Innovations in Intelligent Systems and Applications (INISTA), 2011.
AUTHORS
Dipti Verma received the BE degree in computer science and engineering from R.S.U., Raipur (C.G.), India in 2008 and the ME degree in computer technology and application from CSVTU, Bhilai (C.G.), India in 2010. She is currently working as an assistant professor at SSITM, Bhilai, India. Her research interests include pattern recognition and image processing.
Sipi Dubey received the B.Tech. degree in computer technology from Nagpur University, India in 1996, and the M.Tech. and Ph.D. degrees in computer technology from R.S.U., Raipur (C.G.), India in 2003 and 2010, respectively. From 1996 to 1999, she was a lecturer at DET Bhilai, India. From 2000 to 2011, she was head of the computer science department at CSIT, Durg, India. In 2011 she joined RCET Bhilai, India as Dean (R&D). Her current research interests include pattern recognition, biometrics, and document imaging.