
International Journal of Engineering Sciences & Emerging Technologies, Dec. 2013.
ISSN: 2231-6604, Volume 6, Issue 3, pp. 335-343, IJESET

STATIC SIGNATURE RECOGNITION SYSTEM FOR USER AUTHENTICATION BASED ON TWO-LEVEL COG, HOUGH TRANSFORM AND NEURAL NETWORK

Dipti Verma1, Sipi Dubey2
1 Research Scholar, Department of Computer Science Engineering, RCET, Bhilai (C.G.), India
2 Dean (R&D), Department of Computer Science Engineering, RCET, Bhilai (C.G.), India

ABSTRACT
This paper proposes a signature recognition system for offline signatures based on the centre of gravity, the Hough transform and a neural network. Like other biometric measures, signatures have inherent variability and so pose a difficult recognition problem. In this paper, the signature is preprocessed through binarization, cutting edges and thinning, which provides a more accurate basis for the feature extraction methods. We compute the centre of gravity in two levels, considering the centre of gravity of each character separately instead of taking one common centre of gravity for the entire signature, and finally build a signature recognition system by taking the mean of the centre of gravity values of the various characters present in the signature. Morphological operations are applied to these signature images together with the Hough transform to determine regular shapes, which assists the authentication process. The values extracted from the Hough space are used in a feed-forward neural network trained with the back-propagation algorithm. After the different training stages, an efficiency above 95% was found.

KEYWORDS: static, dynamic, edge detection, COG, back propagation, artificial neural network.

I. INTRODUCTION

Biometrics is the science and technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics for authentication purposes [1]. Humans have recognized each other by their various characteristics for ages. Biometrics offers automated methods of identity verification or identification based on the principle of a measurable physiological or behavioural characteristic, such as a fingerprint or voice. Automatic signature verification is an active research field with many applications. There are two major categories: static and dynamic signature verification.

II. DYNAMIC SIGNATURE

In this mode, users write their signature on a digitizing tablet, which acquires the signature in real time [6]. Dynamic recognition is also known as on-line recognition, meaning that the machine recognizes the handwriting as the user writes. It requires a transducer that captures the signature as it is written. The on-line system produces timing information such as acceleration (speed of writing), retouching, pressure and pen movement [3].

III. STATIC SIGNATURE

In this mode, users write their signature on paper and digitize it through an optical scanner or a camera; the signature can be stored as an image, and the biometric system recognizes the signature

by analyzing its shape; this group is also known as off-line. Off-line handwriting recognition is performed after the writing is complete. The data are captured at a later time by using an optical scanner to convert the image into a bit pattern. Off-line signature processing remains important since it is required in office automation systems. It is used for the validation of cheques, credit cards, contracts, historical documents, etc. Off-line signatures have a total of 37 features, such as centre of gravity, edges and curves, available for authentication [1]. Offline signature recognition is an important form of biometric identification that can be used for various purposes [2]. Signatures are a socially accepted identification method and are commonly used in banking, credit-card transactions, and various business functions. Recognizing signatures can be used to verify identity and authenticate documents. Moreover, given a large collection of business documents relating to legal or intelligence investigations, signatures can be used to identify documents authored or authorized by specific individuals; this is an important form of indexing that can be used in the exploration of the data [3]. Offline signature recognition is a challenging task due to the normal variability in signatures and the fact that dynamic information regarding the pen path is not available [4]. Moreover, training data are normally limited to only a small number of signatures per subject. The application of offline signature recognition has been studied in the context of biometrics to perform authentication and, more recently, in the context of indexing and retrieval of document images in a large database; this, however, comes at the cost of simplifying the actual signature data. Related to offline signature recognition is the problem of shape matching, which is normally treated by determining and matching key points so as to avoid the problems associated with the detection and parameterization of curves; such approaches, however, rely on the use of feature points [5]. The application areas for signature recognition include all applications where handwritten signatures are already used, such as bank transactions, credit card operations or authentication of a document with important instructions or information. The purpose of the signature recognition process is to identify the writer of a given sample, while the purpose of the signature verification process is to confirm or reject the sample.

IV. SIGNATURE DATABASE

The signature samples were acquired from 50 individuals. All these signatures were scanned and stored in the database so that they could be used in the preprocessing steps, the feature extraction method and the matching process.

V. ALGORITHM OF THE PROPOSED METHODOLOGY

Step 1: Scan the signatures from paper to create a signature image database.
Step 2: Resize the scanned image, convert it into a gray-scale image, and then thin it so that its important area can be highlighted.
Step 3: Perform morphological operations, which gradually enlarge the boundaries of the foreground regions.
Step 4: Determine the edges of the shapes in the image, where edges appear in white on a black background. The image area is filtered to remove small dots and isolated pixels, as they affect the local features of the signature. The maximum vertical and horizontal projections of the skeletonized image are calculated.
Step 5: Extract the centre of gravity from each signature in two levels. Level 1 gives the centre of gravity of each connected character present in the signature, and Level 2 calculates the final centre of gravity as the mean of the Level-1 values, since similar images have centres of gravity which are approximately the same for their similar segments or parts.
Step 6: Apply morphological operations on these signature images with the Hough transform to determine regular shapes, which assists the authentication process.
Step 7: Use the values extracted from the Hough space in a feed-forward neural network trained with the back-propagation algorithm.

VI. PREPROCESSING

Preprocessing means standardization of images, which is important before feature extraction: all signatures are binarized and thinned, and size standardization is performed. Steps 1, 2 and 3 of the designed

algorithm are the preprocessing steps. The wide variety of devices capturing signatures creates the need to normalize the input signature image; this is called preprocessing [7]. People do not always sign documents in exactly the same manner: for example, the angle at which they sign may differ due to seating position or hand placement on the writing surface. For this reason the original signature should be appropriately formatted and preprocessed. In this paper three preprocessing steps are applied to the static signature: binarization, cutting edges and thinning.

6.1 Binarization
In the binarization step we reduce the amount of image information (removing colour and background), so the output image is black and white. This type of image is much easier to process further [7]. The threshold is

Q = S / (X · Y)                                                                (1)

where S is the sum of the values of all image pixels and X and Y are the horizontal and vertical sizes of the signature image, respectively. The value of each image pixel is compared to Q: if the value is greater than Q, the pixel is set to white, otherwise it is set to black.
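The thresholding rule of Eq. (1) can be sketched as follows, assuming a grayscale image held as a list of rows of 0-255 values (the function name and toy image are illustrative, not the authors' code):

```python
def binarize(pixels):
    """Threshold a grayscale image at the mean intensity Q = S / (X * Y).

    Pixels brighter than Q become white (255); the rest become black (0),
    so darker-than-average strokes form the foreground, as in Eq. (1).
    """
    height = len(pixels)
    width = len(pixels[0])
    total = sum(sum(row) for row in pixels)  # S: sum of all pixel values
    q = total / (width * height)             # Q = S / (X * Y)
    return [[255 if p > q else 0 for p in row] for row in pixels]

# A 2x3 toy "image": mean intensity is (0+50+100+200+250+0)/6 = 100
img = [[0, 50, 100], [200, 250, 0]]
print(binarize(img))  # -> [[0, 0, 0], [255, 255, 0]]
```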

6.2 Cutting edges
By cutting edges the size of the image is reduced. In this procedure unnecessary signature areas are removed: we find the maximum and minimum values of the X and Y coordinates of the signature and then cut the image to the signature size. This reduces the total number of pixels in the analyzed image [7].
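The cutting-edges step amounts to cropping to the foreground bounding box; a minimal sketch (assuming a binarized image where 0 is black foreground and 255 is white, with at least one foreground pixel):

```python
def cut_edges(binary):
    """Crop a binarized image to the bounding box of its foreground
    (black, 0) pixels, discarding the empty white margins."""
    rows = [y for y, row in enumerate(binary) if any(p == 0 for p in row)]
    cols = [x for x in range(len(binary[0]))
            if any(row[x] == 0 for row in binary)]
    y0, y1 = min(rows), max(rows)
    x0, x1 = min(cols), max(cols)
    return [row[x0:x1 + 1] for row in binary[y0:y1 + 1]]

W, B = 255, 0
img = [[W, W, W, W],
       [W, B, B, W],
       [W, W, B, W],
       [W, W, W, W]]
print(cut_edges(img))  # -> [[0, 0], [255, 0]]
```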

6.3 Thinning
Thinning allows us to form a region-based shape of the signature; here a thresholding process is used. It should be noted that the main features of the object are preserved [7]. Thinning eliminates the effect of different line thicknesses resulting from the use of different writing pens; the result is a skeletonized signature image one pixel wide. Pavlidis' algorithm is used for obtaining the thinned image [8].
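The paper uses Pavlidis' algorithm [8]; as an illustrative stand-in, the widely known Zhang-Suen scheme shows the same idea of iteratively peeling strokes down to a 1-pixel skeleton:

```python
def thin(img):
    """Iterative skeletonization (Zhang-Suen scheme), a stand-in for the
    Pavlidis algorithm the paper cites. img: rows of 1 (ink) / 0 (blank),
    with a one-pixel blank border assumed around the drawing."""
    h, w = len(img), len(img[0])
    grid = [row[:] for row in img]

    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel directly above
        return [grid[y-1][x], grid[y-1][x+1], grid[y][x+1], grid[y+1][x+1],
                grid[y+1][x], grid[y+1][x-1], grid[y][x-1], grid[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if grid[y][x] != 1:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)  # number of ink neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))  # 0->1 transitions around pixel
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and ok:
                        to_clear.append((y, x))
            for y, x in to_clear:
                grid[y][x] = 0
                changed = True
    return grid

# A 3-pixel-thick bar loses ink, while an already 1-pixel-wide line is
# left untouched: after thinning, pen thickness no longer matters.
bar = [[1 if 1 <= y <= 3 and 1 <= x <= 6 else 0 for x in range(8)]
       for y in range(5)]
line = [[1 if y == 2 and 1 <= x <= 6 else 0 for x in range(8)]
        for y in range(5)]
print(thin(line) == line, sum(map(sum, thin(bar))) < sum(map(sum, bar)))
# -> True True
```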

Figure 1: (a) Binarized image; (b) Preprocessed image

VII. FEATURE EXTRACTION

Feature selection and extraction play a major role in all pattern recognition systems. It is preferable to extract those features which enable the system to correctly discriminate one class from another [6]. As genuine samples and forgeries are very similar in most cases, it is very important to extract an appropriate feature set for discriminating between genuine and forged signatures. Any forgery must contain some deviations compared to the original model, and a genuine sample is rarely identical to the model, although there must be similarities between them [8]. Such similarities make it possible to determine the features needed for the verification process. These features are effective in verification as they show a relatively high rate of correctness, give more importance to pixel positions, and are less sensitive to noise [8]. Feature extraction is the gathering of characteristic data which outputs a set of unique information about the signature.
Feature extraction process: at first only height and length were taken as the features for matching signatures in the database. Twenty signatures of different persons were tested, but this produced errors, since the length and height of a signature may vary with writing speed and signing angle; therefore the centre of gravity was selected as an important feature so that the verification procedure can be carried out reliably [9].

7.1 Image area
The number of black (foreground) pixels in the image. In skeletonized signature images, it represents
a measure of the density of the signature traces.

7.2 Maximum vertical projection


The vertical projection of the skeletonized signature image is calculated. The highest value of the
projection histogram is taken as the maximum vertical projection.

7.3 Maximum horizontal projection


As above, the horizontal projection histogram is calculated and the highest value of it is considered as
the maximum horizontal projection.
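Features 7.1-7.3 are simple counts over the skeletonized image; a minimal sketch (function name is illustrative), assuming 1 marks a signature pixel:

```python
def projection_features(binary):
    """Image area and maximum vertical/horizontal projections of a
    skeletonized image, where 1 marks a (black) signature pixel."""
    area = sum(sum(row) for row in binary)  # 7.1: foreground pixel count
    vert = [sum(row[x] for row in binary) for x in range(len(binary[0]))]
    hori = [sum(row) for row in binary]
    return area, max(vert), max(hori)       # 7.2 and 7.3: histogram peaks

img = [[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]]
print(projection_features(img))  # -> (5, 3, 3)
```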

7.4 Edge detection: Laplacian of Gaussian
The Laplacian is a 2-D isotropic measure of the second spatial derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection. It is usually applied to an image that has first been smoothed with something approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise, and hence the two variants are described together here [11]. The operator normally takes a single gray-level image as input and produces another gray-level image as output. The Laplacian L(x, y) of an image with pixel intensity values I(x, y) is given by

L(x, y) = ∂²I/∂x² + ∂²I/∂y²                                                    (2)

This can be calculated using a convolution filter. Since the input image is represented as a set of discrete pixels, we have to find a discrete convolution kernel that can approximate the second derivatives in the definition of the Laplacian; the result is shown in figure 2.

Figure 2: Image obtained by edge detection
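A common discrete kernel approximating Eq. (2) is the 4-neighbour Laplacian [[0,1,0],[1,-4,1],[0,1,0]]; a minimal sketch (without the Gaussian pre-smoothing step the paper applies first):

```python
def laplacian(image):
    """Discrete Laplacian via the 4-neighbour kernel
    [[0,1,0],[1,-4,1],[0,1,0]]: up + down + left + right - 4*centre.
    Border pixels are left at 0 for simplicity."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (image[y-1][x] + image[y+1][x] +
                         image[y][x-1] + image[y][x+1] - 4 * image[y][x])
    return out

# A flat region gives 0; an isolated bright pixel gives a strong response,
# which is why the Laplacian highlights rapid intensity changes (edges).
flat = [[7, 7, 7], [7, 7, 7], [7, 7, 7]]
spike = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(laplacian(flat)[1][1], laplacian(spike)[1][1])  # -> 0 -36
```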

7.5 Centre of Gravity


In this step the centre of gravity is calculated: it is the point G(xg, yg) where the appropriate lines A and B cross. These lines divide the signature image into vertical and horizontal regions containing the same number of pixels [10]. The coordinates (xg, yg) are obtained from the vertical and horizontal projection arrays Nvert and Nhori, respectively: the coordinate xg is equal to the index kx of the cell of the Nvert array for which condition (3) is fulfilled, and yg is obtained analogously from Nhori using condition (4) [12]. Here the centre of gravity (COG) is calculated in two levels: in the first level the COG of each connected character present in the signature is computed (figure 3 shows the level-1 image, where a red mark on each connected character indicates its COG), and in the second level the mean of the level-1 values is calculated [14].
Σ_{x=0}^{kx} Nvert[x] < (1/2) Σ_{x=0}^{255} Nvert[x] ≤ Σ_{x=0}^{kx+1} Nvert[x]        (3)

Σ_{y=0}^{ky} Nhori[y] < (1/2) Σ_{y=0}^{255} Nhori[y] ≤ Σ_{y=0}^{ky+1} Nhori[y]        (4)

Figure 3: Centre of gravity level-1 image
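The two-level scheme can be sketched with a flood-fill over 8-connected components; this is an illustrative sketch (function name assumed), using arithmetic-mean centroids per component rather than the projection-array conditions (3)-(4):

```python
def two_level_cog(binary):
    """Level 1: centre of gravity of every connected character
    (8-connected component of 1-pixels); Level 2: mean of the level-1
    centres, mirroring the paper's two-level COG scheme."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    centres = []
    for y0 in range(h):
        for x0 in range(w):
            if binary[y0][x0] == 1 and not seen[y0][x0]:
                stack, pts = [(y0, x0)], []
                seen[y0][x0] = True
                while stack:  # flood-fill one connected character
                    y, x = stack.pop()
                    pts.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny][nx] == 1
                                    and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                xg = sum(p[1] for p in pts) / len(pts)
                yg = sum(p[0] for p in pts) / len(pts)
                centres.append((xg, yg))              # level-1 COG
    gx = sum(c[0] for c in centres) / len(centres)    # level-2: mean of COGs
    gy = sum(c[1] for c in centres) / len(centres)
    return centres, (gx, gy)

# Two separate 2x2 "characters" give two level-1 centres and their mean:
img = [[1, 1, 0, 0, 1, 1],
       [1, 1, 0, 0, 1, 1]]
centres, cog = two_level_cog(img)
print(centres, cog)  # -> [(0.5, 0.5), (4.5, 0.5)] (2.5, 0.5)
```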

7.6 Number of cross points


Cross point is a signature point that has at least three 8-neighbors.
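The cross-point definition above translates directly into a neighbour count; a minimal sketch (assuming 1 marks a skeleton pixel):

```python
def cross_points(skel):
    """Count skeleton pixels that have at least three foreground
    8-neighbours, the paper's definition of a cross point."""
    h, w = len(skel), len(skel[0])
    count = 0
    for y in range(h):
        for x in range(w):
            if skel[y][x] != 1:
                continue
            n = sum(skel[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)
                    and 0 <= y + dy < h and 0 <= x + dx < w)
            count += n >= 3
    return count

# A diagonal X-shaped stroke has exactly one crossing pixel at its centre.
x_shape = [[1, 0, 0, 0, 1],
           [0, 1, 0, 1, 0],
           [0, 0, 1, 0, 0],
           [0, 1, 0, 1, 0],
           [1, 0, 0, 0, 1]]
print(cross_points(x_shape))  # -> 1
```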

7.7 The Hough Transform


In the last stage the Hough Transform (HT) is used [5]. This algorithm searches for the set of straight lines that appear in the analyzed signature, shown in figure 4. The classical transformation identifies straight lines in the signature image, but it has also been used to identify signature shapes. In the first step the HT is applied and the appropriate curve lines are found; the analyzed signature consists of a large number of straight lines, which are found by the HT. The Hough transform is a feature extraction technique used in image analysis, computer vision, and digital image processing; in its linear form it detects straight lines. For computational reasons it is better to parameterize the lines in the Hough transform with two other parameters, commonly referred to as r and θ (theta). Using this parameterization, the equation of a line can be written as

r = x·cos(θ) + y·sin(θ)                                                        (5)

Figure 4. Hough transform image
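The voting procedure behind the HT can be sketched with a small accumulator over (r, θ) bins; this is an illustrative sketch (function name and discretization are assumptions), not the paper's implementation:

```python
import math

def hough_peak(points, n_theta=180):
    """Vote in (r, theta) space using Eq. (5), r = x*cos(theta) + y*sin(theta),
    and return the strongest line as (r, theta_index, votes).
    points: (x, y) foreground pixels; theta is sampled over [0, pi)."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            r = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(r, t)] = acc.get((r, t), 0) + 1  # one vote per (point, angle)
    (r, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return r, t, votes

# Ten pixels on the vertical line x = 3 all agree on r = 3 at theta = 0,
# so that accumulator cell collects the full ten votes.
print(hough_peak([(3, y) for y in range(10)]))  # -> (3, 0, 10)
```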

VIII. NEURAL NETWORK TRAINING

Multi-layer Perceptron (MLP) neural networks are among the most commonly used classifiers for pattern recognition problems [6]. Despite their advantages, they suffer from some serious limitations that make their use impossible for some problems. The first limitation is the size of the neural network: very large neural networks are very difficult to train, and as the amount of training data increases this difficulty becomes a serious obstacle to the training process. The second difficulty is that the geometry and size of the network, the training method and the training parameters all depend substantially on the amount of training data. Also, in order to specify the structure and size of the neural network, it is necessary to know a priori the number of classes that the network will have to deal with. Unfortunately, for a useful SRVS (signature recognition and verification system), a priori knowledge about the number of signatures and the number of signature owners is not available. In this work a back-propagation neural network is used to make the final decision. Back-propagation neural networks are feed-forward architectures with hidden non-linear layers and a linear output layer. The training of the system includes the following two steps: we trained the network by randomly choosing signature images from our available database, and we passed the extracted features into the neural network, each time updating the input weights

to train the network. The extracted values of each signature image from the database of 150 images are given to the feed-forward neural network (trained using back-propagation gradient-descent learning). Inferences are drawn from three cases:
Case 1: the data sets used for training and testing are the same.
Training data set = 150 images
Testing data set = 150 images
Case 2: the data set used for training is larger than the testing data set.
Training data set = 150 images
Testing data set = 120 images
Case 3: the data set used for training is smaller than the testing data set.
Training data set = 100 images
Testing data set = 120 images
In this model we use a feed-forward neural network with an input layer, two hidden layers and one output layer, with 35 neurons in the first hidden layer and 25 neurons in the second.

IX. SIGNATURE VERIFICATION AND IDENTIFICATION

Verification and identification are performed on the basis of the FAR and FRR.

9.1 Rejection
A legitimate user is rejected when the system does not find the user's current biometric data similar enough to the master template stored in the database. Correct rejection: the system was asked if the signature belonged to a false owner and the response was negative. False rejection: the system was asked if the signature belonged to the correct owner and the response was negative.
9.2 Acceptance
An impostor is accepted as a legitimate user when the system finds the impostor's biometric data similar enough to the master template of a legitimate user. Correct acceptance: the system was asked if the signature belonged to the correct owner and the response was positive. False acceptance: the system was asked if the signature belonged to a false owner and the response was positive.

X. RESULTS

For the verification process different features are extracted using various methods, and matching of the signatures is performed on the basis of these features. This paper focuses on the centre of gravity as an important feature which provides more accurate values for the matching process; the differences between the centre of gravity values obtained for the first 15 signature sample images are shown in Table 1.
In order to know the variations among the various signatures, the centre of gravity value is taken into consideration and the difference of each sample signature from all other samples is calculated. On observing Table 1, it is clear that every signature has a certain variation from the others with reference to the value obtained. In the investigations, the characteristic features (set of sections, projections, area, cross points, centre of gravity) have been tested separately, and the influence of each feature has been observed. The tests gave information about changes in the coefficients FAR (False Accept Rate) and FRR (False Reject Rate). The FAR is typically stated as the ratio of the number of false acceptances to the total number of identification attempts. The FRR is stated as the ratio of the number of false rejections to the total number of identification attempts. Experimental results show that the back-propagation network performs well in identifying forgery. The proposed system might provide an efficient solution to an unresolved and very difficult problem in the field of signature recognition. A false rejection ratio of less than 0.1 and a false acceptance ratio of 0.17 were achieved in this system. The neural network parameters used in the different cases are given in Table 2. Recognition and verification results are given in Table 3 and Table 4, respectively.
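The FAR/FRR definitions above reduce to two ratios; a minimal sketch (function name and toy counts are illustrative, not the paper's measurements):

```python
def far_frr(genuine_results, forgery_results):
    """FAR = false acceptances / forgery attempts,
    FRR = false rejections / genuine attempts,
    where each list entry is True if the system accepted that attempt."""
    far = sum(forgery_results) / len(forgery_results)
    frr = genuine_results.count(False) / len(genuine_results)
    return far, frr

# 1 of 6 forgeries accepted, 1 of 10 genuine samples rejected:
far, frr = far_frr([True] * 9 + [False], [False] * 5 + [True])
print(round(far, 4), frr)  # -> 0.1667 0.1
```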

Table 1. Differences between the centre of gravity values for each pair of signatures S_1 to S_15 (an antisymmetric matrix of pairwise differences with zeros on the diagonal).
Table 2. Parameters used for training the Neural Network

Parameter              Set 1      Set 2      Set 3
Momentum               0.9        0.85       0.78
Learning rate          0.001      0.03       0.03
No. of hidden layers   2          2          2
Epochs                 4800       4800       3500
Nonlinear function     logsig     logsig     logsig
Training function      traingdm   traingdm   traingdm

Table 3. Recognition result for sample user

           Reference output   Sample 1   Sample 2   Sample 3   FRR
Person 1         0.9            0.899      0.854      0.865    0.00
Person 2         0.9            0.883      0.867      0.875    0.00
Person 3         0.9            0.880      0.758      0.832    0.05
Person 4         0.9            0.890      0.895      0.645    0.045

Table 4. Verification result for sample user

           Reference output   Sample 1   Sample 2   Sample 3   FRR
Person 1         0.4            0.283      0.022      0.017    0.00
Person 2         0.4            0.210      0.204      0.108    0.00
Person 3         0.4            0.019      0.384      0.745    0.18
Person 4         0.4            0.008      0.547      0.578    0.16

XI. CONCLUSION AND FUTURE WORK

Centre of gravity is an important parameter or feature used for the matching process of signatures. Existing techniques compute a single-level value for the centre of gravity, whereas in this paper the centre of gravity is refined and calculated in two levels, providing a highly accurate value to distinguish signatures easily. As the matching process is totally dependent on accurate values of the features considered, further refined feature values can be combined with the two-level centre of gravity computation so that the verification process becomes more accurate and authenticity can be increased. Using the Hough transform, regular shapes are determined, which helps overcome the problem of shape variability. A major contribution of this work is the determination of a matrix of points from the Hough space, which is used as an input parameter for the neural network trained with the back-propagation algorithm. The experiments were performed with a 150-signature database, and the efficiency in the three different cases was found to be approximately 95% or above.

REFERENCES
[1]. Z. Riha and V. Matyas, Biometric Authentication System, FIMU RS-200-08, pp. 4-44, 2000.
[2]. G. Agam and S. Suresh, Warping Based Offline Signature Recognition, Information Forensics and Security, Vol. 2, No. 3, pp. 430-437, 2007.
[3]. F. Z. Marcos, Signature Recognition State-of-the-Art, IEEE A&E Systems Magazine, July 2005, pp. 28-32.
[4]. J. Wen, B. Fang, and T. Jhang, Offline Signature Verification: A New Rotation Invariant Approach, pp. 3583-3586.
[5]. B. Jayasekara, A. Jayasiri, and L. Udawatta, An Evolving Signature Recognition System, First International Conference on Industrial and Information Systems (ICIIS 2006), 8-11 August 2006, Sri Lanka, pp. 529-534.
[6]. J. Zheng and G. Zhu, On-Line Handwriting Signature Recognition Based on Wavelet Energy Feature Matching, Proceedings of the 6th World Congress on Intelligent Control and Automation, June 21-23, 2006, Dalian, China, pp. 9885-9888.
[7]. Dipti Verma, Pradeep Mishra, and Tapas Badal, A Preprocessing and Feature Extraction Method for Static Signature Recognition Using Hough Transform and Wavelet Analysis, ICCET'10, International Conference on Computer Engineering and Technology.
[8]. T. Pavlidis, A Thinning Algorithm for Discrete Binary Images, Computer Graphics and Image Processing, 1980, 13: 142-157.
[9]. F. Nouboud and R. Plamondon, Global Parameters and Curves for Off-Line Signature Verification, Workshop on Frontiers in Handwriting Recognition, Taiwan, 1994, pp. 145-155.
[10]. P. Porwik, The Compact Three Stage Method of the Signature Recognition, Proceedings of the 6th International Conference on Computer Information Systems and Industrial Management Applications, 2007.
[11]. Edson J. R. Justino, F. Bortolozzi, and R. Sabourin, Offline Signature Verification Using HMM for Random, Simple and Skilled Forgeries, Proceedings of the 6th International Conference on Document Analysis and Recognition, 2001.
[12]. Maya V. Karki, K. Indira, and Dr. S. Sethu Selvi, Offline Signature Recognition and Verification Using Neural Network, Proceedings of the International Conference on Computational Intelligence and Multimedia Applications, 2007.
[13]. Suhail M. Odeh and Manal Khalil, Off-line Signature Verification and Recognition: Neural Network Approach, International Conference on Innovations in Intelligent Systems and Applications (INISTA), 2011.

[14]. Dipti Verma and Dr. Sipi Dubey, Two Level Centre of Gravity Computation: An Important Parameter for Offline Signature Recognition, IJCA (0975-8887), Volume 54, No. 10, September 2012.

AUTHORS
Dipti Verma received the BE degree in computer science and engineering from R.S.U., Raipur (C.G.), India in 2008 and the ME degree in computer technology and application from CSVTU, Bhilai (C.G.), India in 2010. She is currently working as an assistant professor at SSITM, Bhilai, India. Her research interests include pattern recognition and image processing.
Sipi Dubey received the B.Tech. degree in computer technology from Nagpur University, India in 1996, and the M.Tech. and Ph.D. degrees in computer technology from R.S.U., Raipur (C.G.), India in 2003 and 2010 respectively. From 1996 to 1999 she was a lecturer at DET Bhilai, India. From 2000 to 2011 she was head of the computer science department at CSIT, Durg, India. In 2011 she joined RCET Bhilai, India as Dean (R&D). Her current research interests include pattern recognition, biometrics, and document imaging.

