Acknowledgement
First and foremost, I would like to thank my seminar guide, Prof. Jyoti Mankar, for her guidance and support. I will forever remain grateful for the constant support and guidance she extended in preparing this seminar report. Through our many discussions, she helped me form and solidify my ideas.
With a deep sense of gratitude, I wish to express my sincere thanks to Prof. Dr. S. S. Sane for his immense help in planning and executing the work on time. My grateful thanks also go to the departmental staff members for their support.
I would also like to thank my wonderful colleagues and friends for listening to my ideas, asking questions, and providing feedback and suggestions for improving them.
Abstract

Monetary transactions are an integral part of our day-to-day activities, so currency recognition has become an active research area with a wide range of potential applications. This work introduces a system to recognize and classify four different currencies using computer vision.

Currency recognition technology aims to locate and extract the visible as well as hidden marks on paper currency for efficient classification. Features based on color, texture and shape are extracted for the four currencies, and the notes are classified using an Artificial Neural Network.
Contents

1 Introduction
2 Literature Survey
  2.1 Canny Edge Detection Method
  2.2 Feedforward Backpropagation Procedure
3 Structure of Typical Currency Recognition System
4 Proposed Approach
  4.1 Image Acquisition
  4.2 Image Preprocessing
    4.2.1 Median Filter
  4.3 Feature Extraction
    4.3.1 LUV Color Space
  4.4 Shape Feature
    4.4.1 Edge Detection
    4.4.2 Gray Level Co-occurrence Matrix
Chapter 1
Introduction

Chapter 2
Literature Survey
Several approaches in the literature address paper currency recognition using neural networks.
The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
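As a concrete illustration, the sketch below shows a single forward pass through a small feedforward network in Python with NumPy. The layer sizes and the sigmoid activation are illustrative assumptions and are not taken from the report.

import numpy as np

def sigmoid(z):
    # Logistic activation applied element-wise.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    # Information flows strictly forward: input -> hidden layer(s) -> output.
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Illustrative network: 4 input features, one hidden layer of 8 units, 4 output classes.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(4, 8))]
biases = [np.zeros(8), np.zeros(4)]
x = rng.normal(size=4)  # a feature vector
print(forward(x, weights, biases))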
In the proposed system, a high-resolution scanner is used to acquire the image. The acquired image of a paper currency is first converted to a grayscale image.
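A minimal sketch of this acquisition and preprocessing step is given below using OpenCV in Python; the file name and the median-filter kernel size are assumptions made for illustration (the report's table of contents lists a median filter under preprocessing).

import cv2

# Load a scanned note (file name is hypothetical).
image = cv2.imread("scanned_note.png")
if image is None:
    raise FileNotFoundError("scanned_note.png not found")

# Convert the color scan to a grayscale image.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Median filtering (kernel size 3 assumed) to suppress salt-and-pepper noise.
denoised = cv2.medianBlur(gray, 3)

cv2.imwrite("preprocessed_note.png", denoised)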
Chapter 3
Structure of Typical Currency Recognition System

[Figure: sample notes of the four currencies considered: Pound, Yen, Dollar and Rupee.]
The goal of this work is to achieve the best possible accuracy in recognizing patterns at the lowest possible cost. Since paper currencies are usually recognized by machines with little computing power (such as vending machines and ATMs), cost is a limiting factor. Therefore, it is essential for a paper currency recognizer to minimize power consumption while, at the same time, achieving a high level of accuracy.
Chapter 4
Proposed Approach
The Euler number of an image is a scalar value which represents the number of objects in the image minus the total number of holes in those objects:
e = Σ O − Σ Ho
where O stands for any object in the image and Ho stands for any hole in that object. The mean color intensity cm, the color variance cv and finally the color skewness cs are computed for each channel and stored in the input feature vector.
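A rough sketch of these two feature groups is given below in Python, using scikit-image for the Euler number and SciPy/NumPy for the per-channel color moments. The binarization threshold and the channel layout are assumptions not specified in the report; the table of contents mentions the LUV color space, so the channels would presumably be L, u and v, while RGB is used here purely for illustration.

import numpy as np
from scipy.stats import skew
from skimage.measure import euler_number

def shape_and_color_features(rgb, gray):
    # Euler number: objects minus holes, computed on a binarized image
    # (fixed threshold of 128 assumed here).
    binary = gray > 128
    e = euler_number(binary)

    features = [float(e)]
    # Mean, variance and skewness per color channel.
    for c in range(rgb.shape[2]):
        channel = rgb[:, :, c].astype(float).ravel()
        features.extend([channel.mean(), channel.var(), skew(channel)])
    return np.array(features)

# Example usage (hypothetical arrays):
# rgb = cv2.cvtColor(cv2.imread("scanned_note.png"), cv2.COLOR_BGR2RGB)
# gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
# vec = shape_and_color_features(rgb, gray)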
After creating a GLCM, the following statistical measures are extracted from the matrix:

1. Contrast: returns a measure of the intensity contrast between a pixel and its neighbor over the whole image.
   Range = [0, (size(GLCM,1)-1)^2]
   Contrast is 0 for a constant image. The property Contrast is also known as variance and inertia.

2. Correlation: returns a measure of how correlated a pixel is to its neighbor over the whole image.
   Range = [-1, 1]
   Correlation is 1 or -1 for a perfectly positively or negatively correlated image. Correlation is NaN for a constant image.

3. Energy: returns the sum of squared elements in the GLCM.
   Range = [0, 1]
   Energy is 1 for a constant image. The property Energy is also known as uniformity, uniformity of energy, and angular second moment.

4. Homogeneity: returns a value that measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal.
   Range = [0, 1]
   Homogeneity is 1 for a diagonal GLCM.
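These four properties can be computed, for example, with scikit-image as sketched below; the distance and angle parameters of the co-occurrence matrix are assumptions (the report does not specify them), and the function names follow recent scikit-image versions (graycomatrix/graycoprops).

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray):
    # gray is expected to be an 8-bit grayscale image (values 0-255).
    # A pixel distance of 1 and an angle of 0 are assumed here.
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "correlation", "energy", "homogeneity")}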
Chapter 5
Once the features are extracted, it is essential to recognize the currency from these features using an effective recognition stage called the classifier. One of the most commonly used classifiers in recent work is the Artificial Neural Network.
Epoch: the presentation of the set of training (input and/or target) vectors to a network and the calculation of new weights and biases. Training vectors can be presented one at a time or all together in a batch.
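A hedged sketch of such a classifier is shown below using scikit-learn's MLPClassifier; the hidden-layer size, the number of epochs (max_iter) and the synthetic feature vectors are illustrative assumptions, since the report does not give the network configuration.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the extracted feature vectors (color, texture, shape)
# and the four currency classes: 0=Pound, 1=Yen, 2=Dollar, 3=Rupee.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))   # 16 features per note (assumed)
y_train = rng.integers(0, 4, size=200)

# Feedforward network trained with backpropagation; one pass over the
# training set corresponds to an epoch, here capped at 200 epochs.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200)
clf.fit(X_train, y_train)

X_test = rng.normal(size=(10, 16))
print(clf.predict(X_test))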
The recognition results are shown in Table 2. The average recognition rate was observed to be 93.84%, which is quite reasonable and acceptable in many cases.