Counterfeit Detection Final
Submitted To:
In partial fulfillment of the requirements of Final Year, 7th Semester for the
Bachelor’s Degree in Computer Science and Information Technology
Submitted By:
Declaration Sheet
(Presented in partial fulfillment of the assessment requirements for the above award)
This work or any part thereof has not previously been presented in any form to the
University or to any other institutional body whether for assessment or for other purposes.
We confirm that the intellectual content of the work is the result of our own efforts and of
no other person.
It is acknowledged that the author of any project work shall own the copyright. However,
by submitting such copyright work for assessment, the author grants the University a
perpetual royalty-free license to do all or any of those things referred to in section 16(1) of
the Copyright Designs and Patents Act 1988. (viz. to copy work; to issue copies to the
public; to perform or show or play the work in public; to broadcast the work or to make an
adaptation of the work).
Declared By:
National College of Computer Studies
Tribhuvan University
Supervisor Recommendation
I hereby recommend that this project, prepared under my supervision, entitled “Counterfeit
Money Detection using SVM Classification with CNN for Matching Percentage
Evaluation”, a platform that detects counterfeit money from currency notes and provides
results, be processed for evaluation in partial fulfillment of the requirements for the degree
of B.Sc. in Computer Science and Information Technology.
.....................................
Project Supervisor
Paknajol, Kathmandu
Certificate of Approval
This is to certify that the project report entitled “Counterfeit Money Detection using SVM
Classification with CNN for Matching Percentage Evaluation” is a bona fide report of
the work carried out by Ms. Ramila Kumari Shahi, Ms. Sanisha Maharjan and Ms.
Shristi Bajracharya under my guidance and supervision, for the degree of B.Sc. in
Computer Science and Information Technology at National College of Computer Studies,
Tribhuvan University.
To the best of my knowledge and belief, this work embodies the work of the candidates, has
been duly completed, fulfills the requirements of the ordinance relating to the Bachelor's
degree of the university, and is up to the standard in respect of content, presentation, and
language for being referred to the examiner.
...................................... ......................................
Mr. Yuba Raj Devkota Mr. Rajan Paudel
Project Supervisor Program Coordinator
National College of Computer Studies National College of Computer Studies
...................................... ......................................
External Examiner Internal Examiner
Tribhuvan University National College of Computer Studies
Acknowledgement
It gives us immense pleasure to express our sincere gratitude and happiest appreciation
to all those respectable personalities who helped us make our project work successful as
well as productive.
Our sincere appreciation goes to our mentors, teachers, and guides, without whose
unwavering support and assistance this project would not have come to fruition. We are
deeply thankful for the continuous help and support provided by the college throughout the
development of the project.
Special thanks are owed to Mr. Yuba Raj Devkota for his invaluable supervision and
assistance. His guidance and support were crucial during challenging moments, and
without him, this project would not have been possible. We are grateful for his consistent
error corrections, valuable suggestions, and guidance that proved to be instrumental
throughout the project development. Additionally, we extend our thanks to all the teachers
who provided us with the opportunity to undertake this project. Their encouragement and
support were integral to the project's growth and success.
Abstract
Table of Contents
5.1 Implementation......................................................................................................... 31
5.1.1 Tools used. ......................................................................................................... 31
5.1.2 Implementation Details of Algorithm: ............................................................ 36
5.2 Testing ...................................................................................................................... 42
5.2.1 Test Cases for Unit Testing ............................................................................... 42
5.2.2 Test Cases for System Testing........................................................................... 45
5.3 Result Analysis ......................................................................................................... 47
Chapter 6: Conclusion and Future Recommendations ................................................ 56
6.1 Conclusion ................................................................................................................ 56
6.2 Future Recommendation .......................................................................................... 56
References ......................................................................................................................... 57
Appendix ........................................................................................................................... 59
List of Abbreviations
List of Figures
Figure 5. 19 CNN (Watermark): Accuracy for Validation Data Table ............................. 53
Figure 5. 20 CNN (Security Thread): Accuracy for Training Data Table ......................... 53
Figure 5. 21 CNN (Security Thread): Accuracy for Validation Data Table ...................... 53
Figure 5. 22 SVM: Confusion Matrix ................................................................................ 54
Figure 5. 23 CNN (Watermark): Confusion Matrix .......................................................... 55
Figure 5. 24 CNN (Security Thread): Confusion Matrix ................................................... 55
List of Tables
Table 3. 1 Schedule Plan .................................................................................................... 15
Chapter 1: Introduction
1.1 Introduction
Counterfeit money, or fake money, is any currency (paper note or coin) produced outside
the legal sanction of a state or government in imitation of genuine currency, with the intent
to deceive its recipient. The business of counterfeiting money is as old as the history of
money itself. Today, the finest counterfeit banknotes are called super dollars or super
currency because of the high quality of their imitation of real currency.
Of the many security features of a banknote, such as the watermark, see-through register,
security thread, embossing, and features for the visually impaired, our fake money detection
system uses the watermark and the security thread as its main tools to identify fake
currency. By employing image processing techniques, the system detects the watermark
and security thread in genuine currency notes to distinguish them from fake ones. This
approach aims to offer a dependable and effective solution for identifying counterfeit
banknotes, emphasizing the importance of watermark and security thread analysis in the
detection process.
In Nepal, counterfeit money fraud is on the rise. Data from the past three months (from
Nepal News) show that counterfeit notes worth over 1 million rupees were confiscated, and
many more remain hidden and unidentified. Banks and financial institutions, as mediators
that circulate money around the country, are the main targets for people possessing fake
currency. Deploying such a system in banks and financial institutions would therefore
decrease the flow of fake currency in the market. This system will help identify fake
currency beforehand and block the circulation of fake notes.
1.3 Objectives
1.4 Scope and Limitation
This section explains the scope of our project, how we plan to deliver it, and how we intend
to make it effective. The system's central added functionality is currency note detection.
1.4.2 Limitation
• Banknotes that show signs of tear or aging can occasionally be mistakenly identified
as counterfeit.
In the project, the Waterfall method has been used as the development model. The Waterfall
model is a structured and systematic approach, making it suitable when requirements are
well-defined and changes are expected to be minimal. Each phase must be completed before
moving on to the next, ensuring a clear progression in the development process.
1. Requirement Analysis:
In this phase, the requirements for counterfeit money detection are thoroughly
defined. This includes specifying the types of currency to be detected and analyzing
the security features that need to be identified.
2. System Design:
Building upon the gathered requirements, the system design phase entails creating
a comprehensive blueprint or architecture for the counterfeit money detection
system. This encompasses defining modules, data structures, interfaces, and
algorithms crucial for accurate detection.
4. Testing Phase:
The testing phase is dedicated to ensuring that the individual components or
modules function correctly when integrated into the entire counterfeit money
detection system. Various testing methods, including unit testing and integration
testing, are employed to validate the system's functionality.
7. Maintenance Phase:
The maintenance phase involves providing ongoing support and updates to the
counterfeit money detection system. Any identified issues or bugs are addressed
promptly, and updates or enhancements may be implemented based on user
feedback or evolving counterfeit threats.
Figure 1. 1 Waterfall Model
Since the project requirements are well-defined, stable, and not expected to undergo
significant changes, the Waterfall Model was selected for its ability to provide a controlled
and predictable development process, catering to the specific characteristics and
requirements of the counterfeit money detection project.
The report on ‘Counterfeit Money Detection’ is structured into six chapters, which include:
Chapter 1: Introduction
It includes the study of fundamental theories and general concepts, as well as a review of
the present state of counterfeit money detection (existing systems, people's preferences, etc.).
Chapter 3: System Analysis
This chapter includes requirement analysis and feasibility of the system. It also consists of
ER-diagram and DFD-diagram.
Chapter 4: System Design
It consists of the overall design of the system and the algorithms used.
Chapter 5: Implementation and Testing
It provides details of the tools we used while developing the system. It also includes the
required testing, with different tests as per the requirements.
Chapter 6: Conclusion and Future Recommendations
It summarizes the key findings, insights, and outcomes derived from the project, along with
future suggestions for potential actions, improvements, or areas of further exploration
based on the conclusions drawn.
Chapter 2: Background Study and Literature Review
classification model using Logistic Regression shows a better accuracy of 99%, identifying
the most feasible technique to be implemented based on the accuracy rate.
REVIEW ON DETECTION OF
AUTOMATIC RECOGNITION OF FAKE INDIAN CURRENCY NOTE (Sonali R.
Darade, Prof. G.R. Gadveer) [5]: In this paper, an automatic system is designed to identify
Indian currency notes and check whether they are fake or original. The automatic system is
very useful in banking and other fields as well. In India, counterfeit notes of 100, 500, and
1000 rupees are on the rise; as technologies like scanning, color printing, and duplicating
advance, the counterfeiting problem increases with them. In this paper, recognition of fake
Indian currency notes is done using image processing techniques. First, image acquisition
is performed and preprocessing is applied to the image: it is cropped, smoothed, and
adjusted, then converted to grayscale. After conversion, image segmentation is applied,
features are extracted and reduced, and finally the images are compared.
With the advancement in technology, the amount of crime carried out through misuse of
these technologies is also increasing on a large scale. A similar thing applies to the currency
notes we handle on a day-to-day basis. Fake currency notes are made so accurately that
finding out which note is real and which is fake is becoming increasingly difficult. A fake
currency note is an imitation of an authentic currency note, made for wrongful purposes
and without the permission of the state or central government. Hence, as the production of
fake currency notes advances, detection mechanisms need to be developed as well to
reduce their flow in the market. Fake currency notes have led to a reduction in the value of
money and to losses on the economic and social fronts. In the paper, Image Processing and
Machine Learning are used to check the authenticity of a currency note. An Android
application helps people easily detect fake notes; as many people today carry smartphones,
an Android application is not a difficult thing for people to handle. This satisfies the
purpose of the application: being helpful to the common citizens of India. Keywords: Fake
Currency Detection, Counterfeit Detection.
REAL TIME FAKE CURRENCY NOTE DETECTION USING DEEP LEARNING
(M. Laavanya, V. Vijayaraghavan), Int. J. Eng. Adv. Technol. (IJEAT) 9(1S5), ISSN:
2249-8958 [9]: Great technological advancement in the printing and scanning industry has
made the counterfeiting problem grow more vigorously. As a result, counterfeit currency
affects the economy and reduces the value of original money. Thus, detecting fake
currency is essential. Most former methods are based on hardware and image processing
techniques; finding counterfeit currencies with these methods is less efficient and time-
consuming. To overcome this problem, the authors propose detecting counterfeit currency
using a deep convolutional neural network. Their work identifies fake currency by
examining currency images. A transfer-learned convolutional neural network is trained on
a dataset of Indian currency notes of denominations 2000, 500, 200, and 50 to learn the
feature maps of the currencies. Once the feature map is learnt, the network is ready to
identify fake currency in real time. The proposed approach efficiently identifies forged
notes of 2000, 500, 200, and 50 rupees with little time consumption.
Chapter 3: System Analysis
The system analysis of the counterfeit money detection system involves defining
objectives, gathering requirements, selecting appropriate technologies, implementing
feature extraction techniques, training CNN models, and designing a user-friendly interface
to accurately detect counterfeit currency notes and enhance financial security measures.[11]
The requirement analysis for the counterfeit money detection system encompasses
both functional and non-functional aspects essential for accurate and efficient
detection of counterfeit currency.
i. Functional Requirement
Use Case Diagram:
Use Cases:
a. Login:
• Actor: User
• Description: The user logs into the system using valid credentials (username
and password).
b. Registration:
• Actor: User
• Description: New users can register for the system by providing necessary
information and creating a username and password.
c. Upload:
• Actor: User
• Description: Users can upload an image of a currency note for analysis. The
uploaded image is then processed by the system.
d. Counterfeit Check:
• Actor: System
• Description: The system analyzes the uploaded image to determine if the
currency notes are genuine or counterfeit. This involves using image
processing and counterfeit detection algorithms.
e. Display Result:
• Actor: System
• Description: After the counterfeit check is performed, the system displays
the results to the user. It indicates whether the uploaded currency note is
genuine or potentially counterfeit. The system also displays the matching
percentage of watermark and security thread of the uploaded currency note.
ii. Non-Functional Requirement
• Performance:
The system should be able to process currency images quickly and accurately
to detect counterfeit notes efficiently.
It should have a high level of accuracy in distinguishing between genuine and
fake currency.
• Reliability:
The system should be reliable, ensuring consistent and accurate detection
results.
It should be robust enough to handle variations in currency notes and
environmental conditions.
• Usability:
The system should have a user-friendly interface that is easy to navigate for
users checking the authenticity of currency notes.
It should provide clear and understandable results to users.
• Security:
The system should maintain the confidentiality and integrity of the data
processed, especially when dealing with sensitive financial information.
It should have measures in place to prevent unauthorized access to the system.
• Scalability:
The system should be scalable to accommodate an increasing number of users
and currency checks without compromising performance.
It should be able to handle a growing dataset of currency images for training
and testing.
• Maintainability:
The system should be easy to maintain and update with new counterfeit
detection techniques or features.
It should have clear documentation and modular design for ease of
maintenance.
These non-functional requirements are essential for ensuring the overall effectiveness,
usability, and reliability of a counterfeit money detection system.
3.1.2 Feasibility Analysis
i Technical Feasibility
The technical feasibility of the project is high, as the required hardware and
software resources are widely available and accessible. Requirements of our system
can be categorized as:
Hardware Requirements:
• A computer with a minimum of 4 GB of RAM and a multi-core processor
• Sufficient RAM for storing and processing image data during the
counterfeit detection process.
• Storage devices for maintaining a database of known currency features and
patterns for comparison during counterfeit detection.
Software Requirements:
• Operating System: Windows 10, Linux, or MacOS
• Web Browser: Chrome, Firefox, or Safari
• Python 3.6 or above: This is required to run the Flask web framework.
• Flask web framework: This is required to build the web application and API for
the system.
• ReactJS: This is required to build the frontend of the web application and to
create a responsive user interface.
• Git and GitHub: These are required for version control and collaboration among
the development team.
In addition to the above requirements, some optional software may be used for
development such as virtual environments like Anaconda to manage dependencies.
ii Operational Feasibility
The operational feasibility of the project is moderate, as the project requires the
development of a reliable and accurate counterfeit money classifier. The operational
feasibility of implementing a counterfeit money detection system hinges on its
seamless integration with existing processes, compatibility with current
infrastructure, and user acceptance. Regulatory compliance, scalability, reliability,
and minimal operational impact are also key considerations. The system's
maintenance requirements, risk management, and customer experience implications
further contribute to determining its operational viability. Ultimately, a thorough
evaluation of these factors is essential to ascertain the practicality and effectiveness
of a counterfeit money detection system.
iv Schedule Feasibility
The time given for the completion of this project was a whole semester, so there
was enough time for completion. Since the project involves some machine learning
mechanisms, it took somewhat more time than usual. Hence, the project has been
developed according to the following time schedule to keep the application
schedule-feasible.
3.1.3 Analysis
Figure 3. 2 ER-Diagram
User Entity:
The "User" entity is a fundamental component of the counterfeit money detection system,
representing individuals who engage with the system. It is characterized by four main
attributes:
• user_id: This attribute serves as a unique identifier for each user in the system,
ensuring individuality and facilitating relational links.
• username: The username attribute represents the chosen identifier or display name
selected by the user during the registration process.
• email: This attribute stores the user's email address, providing a means of
communication and contact.
• password: The password attribute stores the secure access key chosen by the user
during registration, ensuring the confidentiality of their account.
Currency Image:
The "Currency Image" entity is a pivotal element in the counterfeit money detection system,
capturing the images submitted by users for analysis. This entity is characterized by three
crucial attributes:
• image_id: This is the attribute where the uploaded image identifier is stored.
• Upload_Date_Time: This attribute represents the timestamp when the currency
image was uploaded to the system. It serves as an essential chronological reference,
aiding in tracking and organizing the images based on their submission time.
• image_data: The image_data attribute stores the binary data or reference to the
actual image file. This includes the content of the currency image uploaded by the
user. It may involve the image file itself or a link/reference pointing to the stored
image data.
The "Detection Result" entity is a critical component of the counterfeit money detection
system, recording the outcomes of the analysis performed by the system. This entity is
characterized by several key attributes:
• result_id: This attribute serves as a unique identifier for each detection result. It
ensures individuality and allows for easy referencing and tracking of results.
• Prediction Date Time: The prediction_date attribute indicates the timestamp when
the system performed the counterfeit detection analysis. It provides chronological
information, allowing users to understand when the analysis was conducted.
• Watermark Similarity: This attribute represents the degree of similarity between
the uploaded currency image and the expected watermark of genuine currency. It
quantifies the likeness and contributes to determining the authenticity of the
currency note.
• Security Thread Similarity: This attribute measures the resemblance of the
uploaded currency image to the expected characteristics of a genuine currency note,
specifically focusing on features like security features or metallic elements.
• Pred_result: This is the attribute where the result of the uploaded image is stored,
indicating whether it is classified as real or fake.
• Image: This attribute stores the uploaded image in the form of an image blob.
DFD-0 level Diagram
In the DFD-0 level, the primary entities include the "user" and the "counterfeit money
detection" process. Users interact with the system through dataflows like "upload image,"
"login," and "register," all directed towards the "counterfeit money detection" process. The
"upload image" dataflow enables users to submit currency images for analysis, while
"login" and "register" processes handle user authentication and account creation,
respectively. The "counterfeit money detection" process processes the uploaded images and
generates predicted results based on the analysis. The outcomes, specifically "predicted
result" and "authentication status," are then communicated back to the user. This DFD-0
level diagram illustrates the essential interactions and dataflows between the user entity
and the counterfeit money detection process, capturing the key components for image
submission, user authentication, and result retrieval.
DFD-1 level Diagram
In the DFD-1 diagram, a singular user entity interacts with key processes within the
counterfeit money detection system. User authentication is handled through the "user login"
process, while the "receive image" process manages image submissions. Subsequent
processes include "image processing" for quality enhancement, "feature extraction and
analysis" for authenticity evaluation, and the central "counterfeit detection and decision"
process. Three critical data stores— "user table," "currency table," and "prediction"— store
essential information, streamlining interactions and supporting informed decision-making
in the system. This concise breakdown in the DFD-1 diagram provides a clear snapshot of
subprocesses and data management, facilitating efficient system understanding and design
considerations.
Chapter 4: System Design
4.1 Design
The diagram above shows an Entity-Relationship (ER) diagram with three tables: ‘user’,
‘Currency image’ and ‘Detection Result’. It illustrates the transformation of an ER model
into a relational schema:
Normalizations
1. User Login & User Sign-Up Form: Allows users to register or log into the platform.
2. Contact-Form: Allows users to reach out to the platform.
4.1.3 Interface Design
Interface design, often referred to as user interface (UI) design, focuses on creating visual
elements and interactions that enable users to interact with a system or application. A well-
designed interface should be intuitive, visually appealing, and user-friendly.
Figure 4. 5 Sign-Up Design
Figure 4. 7 Result
4.2 Algorithm Details
SVM stands for Support Vector Machine, which is a popular type of machine learning
algorithm used for classification and regression analysis.
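To make the classification idea concrete, here is a minimal sketch (not the project's actual code) of how a trained linear SVM classifies a sample by the sign of its decision function; the weights and bias below are illustrative values, not learned ones:

```python
import numpy as np

def svm_predict(weights, bias, sample):
    """Linear SVM decision rule: class 1 if w.x + b >= 0, else class 0."""
    return 1 if np.dot(weights, sample) + bias >= 0 else 0

# Illustrative weights for a two-feature toy problem.
w, b = np.array([1.0, -1.0]), 0.5

# 1*2 + (-1)*1 + 0.5 = 1.5 >= 0, so this sample is assigned class 1.
print(svm_predict(w, b, np.array([2.0, 1.0])))
```

In training, the SVM learns the weights and bias that maximize the margin between the two classes; at prediction time, only this simple dot-product rule is needed.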
The formula for standardization is as follows:
Standardized Value = (Original Value − Mean) / Standard Deviation
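The formula above can be sketched in Python (a minimal illustration, not the project's actual code):

```python
import numpy as np

def standardize(values):
    """Standardize a 1-D feature array: (value - mean) / standard deviation."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()

# Standardized values always have mean 0 and standard deviation 1.
scaled = standardize([2.0, 4.0, 6.0, 8.0])
print(scaled.mean(), scaled.std())
```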
9. Calculate the Confusion Matrix for testing:
The confusion matrix is calculated using the confusion_matrix function. It takes two
arguments: test_labels, which represent the true labels of the test data, and
test_predictions, which represent the predicted labels of the test data.
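As an illustrative sketch, the call described above might look as follows; the variable names test_labels and test_predictions follow the description, while the label values themselves are toy data:

```python
from sklearn.metrics import confusion_matrix

# True labels of the test data (1 = real, 0 = fake) -- toy values for illustration.
test_labels      = [1, 0, 1, 1, 0, 0, 1, 0]
# Labels predicted by the classifier for the same samples.
test_predictions = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are true classes, columns are predicted classes.
cm = confusion_matrix(test_labels, test_predictions)
print(cm)  # [[3 1]
           #  [1 3]]
```

The diagonal entries count the correctly classified fake and real notes, and the off-diagonal entries count the two kinds of misclassification.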
Convolutional Neural Networks (CNNs) are a class of deep learning algorithms specifically
designed for processing and analyzing visual data, such as images and videos. [12] Here's
an overview of the key components and operations involved in a matching percentage with
CNN:
b. Max-pooling layer (MaxPooling2D):
Max-pooling reduces the spatial dimensions of the feature maps, retaining
the most important information and decreasing computational complexity.
c. Global Average Pooling (GlobalAveragePooling2D):
Global Average Pooling is applied globally across the entire feature map.
Instead of dividing the input into local regions and computing the maximum
or average within each region, Global Average Pooling computes the
average of each feature map across its entire spatial dimensions. The result
is a single value per feature map.
GAP(X_i) = (1 / (H × W)) Σ_{h=1}^{H} Σ_{ω=1}^{W} X_i(h, ω)
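The computation above can be sketched with NumPy (an illustration, not the project's actual layer implementation):

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Average each feature map over its full spatial extent (H x W).

    feature_maps: array of shape (H, W, C) -> returns one value per channel C.
    """
    return feature_maps.mean(axis=(0, 1))

# Toy 2x2 input with 2 channels: channel 0 averages to 2.5, channel 1 to 25.0.
x = np.array([[[1.0, 10.0], [2.0, 20.0]],
              [[3.0, 30.0], [4.0, 40.0]]])
print(global_average_pooling(x))
```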
e. Output Layer:
The output layer produces the final prediction. For the binary classification task
of counterfeit money detection, a softmax activation is used over the two classes,
Real and Fake. The softmax function ensures that the sum of all output probability
values is always equal to one, so that one can easily see which class has the highest
probability of being the right one:

softmax(z_i) = e^(z_i) / Σ_{j=1}^{N} e^(z_j)

where z_i represents the raw score for class i and N is the total number of classes.
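A minimal NumPy sketch of the softmax function (illustrative, not the project's actual code):

```python
import numpy as np

def softmax(z):
    """Convert raw class scores into probabilities that sum to one."""
    z = np.asarray(z, dtype=float)
    exp_z = np.exp(z - z.max())  # subtract the max for numerical stability
    return exp_z / exp_z.sum()

# Two raw scores (e.g. Real vs Fake): probabilities sum to 1, and the
# higher score gets the higher probability.
probs = softmax([2.0, 1.0])
print(probs, probs.sum())
```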
Gray = 0.299 × R + 0.587 × G + 0.114 × B

This formula calculates the weighted average of the RGB (Red, Green, Blue) values
to obtain a single grayscale value.
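A minimal sketch of this conversion, assuming the standard luminance weights 0.299, 0.587, and 0.114 (not necessarily the project's exact code):

```python
import numpy as np

def to_grayscale(rgb_image):
    """Weighted average of the R, G, B channels using standard luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb_image @ weights  # shape (H, W, 3) -> (H, W)

# A single pure-white pixel maps to full intensity, since the weights sum to 1.
pixel = np.array([[[255.0, 255.0, 255.0]]])
print(to_grayscale(pixel))
```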
3. Detect Watermark:
The detect_watermark function is used to detect a watermark in the preprocessed
images. It uses template matching (cv2.matchTemplate) with a set of watermark
template images.
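Since the project itself uses cv2.matchTemplate, the following is only a simplified NumPy sketch of the idea behind template matching with a normalized cross-correlation score; the function and variable names are illustrative:

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the best correlation score
    and its top-left position -- a simplified analogue of cv2.matchTemplate
    with the TM_CCOEFF_NORMED method."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_score, best_pos

# The template appears exactly at (1, 1), so the score there is 1.0.
img = np.zeros((5, 5))
img[1:3, 1:3] = np.array([[1.0, 2.0], [3.0, 4.0]])
tpl = np.array([[1.0, 2.0], [3.0, 4.0]])
score, pos = match_template(img, tpl)
print(score, pos)
```

A watermark detector would threshold the best score (e.g. accept the note region as containing the watermark if the score exceeds some cutoff) and repeat the search over each template image.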
ω(new) = ω(old) − η ∇J(ω(old))

where ∇J(ω(old)) is the gradient of the loss function J with respect to the weight ω,
evaluated at ω(old), and η is the learning rate.
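A one-line sketch of this weight update (illustrative, not the project's training loop):

```python
def gradient_descent_step(w_old, grad, learning_rate):
    """One weight update: w_new = w_old - learning_rate * gradient."""
    return w_old - learning_rate * grad

# Minimizing J(w) = w**2, whose gradient is 2w, starting from w = 3.0.
w = 3.0
for _ in range(100):
    w = gradient_descent_step(w, 2 * w, learning_rate=0.1)
print(w)  # converges toward the minimum at w = 0
```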
Chapter 5: Implementation and Testing
5.1 Implementation
Testing is the process of checking and confirming a software application. Initial
testing of the system is first carried out by the developers themselves; it aids in
ensuring that the system complies with the requirements. This chapter uses
pseudocode and diagrams to explain the theory underlying the implementation of the
interesting and difficult features. With a well-structured plan and the right
development approach, the implementation phase of a project can be the most
pleasant and trouble-free phase.
Different tools and technologies have been used to implement this application. They are
listed below:
1. Programming Languages:
i Python
The system is developed using Python as the primary programming language,
leveraging a set of powerful tools and libraries. File and folder manipulation tasks
are handled using the os module, while image processing functionalities are
achieved through the cv2 (OpenCV) library for reading, processing, and saving
images, alongside the numpy library for numerical operations. The deep learning
and neural network components are implemented with the Keras libraries, allowing
the construction of a custom model architecture designed for watermark and silver
line detection in images. The code also makes use of Python's control flow
statements, mathematical functions from the math module, and text file operations
for conditional execution, rounding, and saving class indices to a file. Additionally,
the matplotlib.pyplot library aids in visualizing training metrics. Overall, Python's
versatility and the robust toolset provided by these libraries enable the effective
implementation of image processing and machine learning functionalities in the
code.
ii. HTML
In the system, the structure and styling of a web page for a system named
"DETECTO" are defined. The HTML includes metadata, links to external
stylesheets and scripts, as well as embedded styles and scripts. The page features a
responsive navigation menu, social media icons, and a prominent header with the
DETECTO logo. It also incorporates sections for displaying an introductory
message, an image upload panel, and a button to trigger image processing. The
JavaScript script handles events such as clicking the "See Full Details" button,
redirecting users based on a prediction result. This HTML code is integral to the
user interface and functionality of the system, facilitating image input, processing,
and result presentation. In the documentation, this HTML code should be described
as the front-end structure of the DETECTO system, outlining its key components
and interactions, especially focusing on user input and result presentation features.
iii. CSS
The provided HTML document contains embedded CSS styles that define the visual
presentation of the web page. The styles are written within <style> tags in the
document's <head> section and include rules for various elements such as
containers, images, buttons, and form elements. The CSS utilizes selectors to target
specific HTML elements and applies properties like color, padding, positioning,
and hover effects to achieve the desired appearance. Additionally, the document
incorporates some inline styles directly within HTML elements. The CSS rules also
include media queries for responsiveness, ensuring a consistent layout across
different screen sizes. Overall, the CSS in this document is structured to create a
visually appealing and responsive user interface for the DETECTO website, with
specific styling for buttons, image displays, and other elements to enhance the user
experience.
iv. Bootstraps
Bootstrapping is used to provide more stable training by exposing the models to
different subsets of the data in each iteration. It allows you to observe how the
models' performance changes when certain instances or features are included or
excluded.
v. Json
There is a small snippet of JSON code embedded in the HTML within a <script>
tag using the type "application/ld+json". This JSON code defines an organization
with properties such as name, logo, and social media links. JSON is a lightweight
data interchange format that is often used to structure and exchange data between a
server and a web application. In this case, it is used for providing structured data
about the organization for search engines and other applications that may consume
this information.
ii. Scikit-learn
The scikit-learn (sklearn) library is used in the provided code. Specifically, it is used
for train-test splitting and scaling features. The custom_train_test_split function is
a custom implementation of a train-test split, which is typically handled by the
train_test_split function in scikit-learn. Additionally, a custom standard scaler
(CustomStandardScaler) is implemented, which is similar to the functionality
provided by scikit-learn's StandardScaler. These scikit-learn functionalities are
utilized to preprocess and split the data for training and testing the custom Support
Vector Machine (SVM) model.
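As a hedged sketch of the two helpers described above (the names custom_train_test_split and CustomStandardScaler come from the code, but their bodies are reconstructed here from the description, not taken from the project source):

```python
import numpy as np

def custom_train_test_split(X, y, test_size=0.2, seed=42):
    """Shuffle indices and split features/labels into train and test sets."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))
    n_test = int(len(X) * test_size)
    test_idx, train_idx = indices[:n_test], indices[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

class CustomStandardScaler:
    """Standardize features to zero mean and unit variance,
    mirroring scikit-learn's StandardScaler."""
    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        self.std_ = X.std(axis=0) + 1e-8  # guard against division by zero
        return self

    def transform(self, X):
        return (X - self.mean_) / self.std_

    def fit_transform(self, X):
        return self.fit(X).transform(X)
```

The scaler is fit on the training split only, then applied to both splits, so test statistics never leak into training.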
3. Deep Learning Frameworks:
Keras
In the provided code, Keras is utilized for constructing, compiling, and training a
convolutional neural network (CNN) model. Custom layers, such as CustomDense
and inverted_residual_block, are implemented using Keras's Layer class. The
MobileNetV2 architecture is defined using Keras layers, and the model is compiled
with the Stochastic Gradient Descent (SGD) optimizer, binary crossentropy loss,
and accuracy as the evaluation metric. Keras's ImageDataGenerator is employed for
data preprocessing and augmentation, facilitating the creation of batches for
training and validation. The training process is orchestrated using the fit method,
and the ModelCheckpoint callback from Keras is employed to save the best model
weights based on validation loss. Finally, the trained model is saved using Keras's
save method, resulting in two files: one for the model checkpoint during training
(watermark_mobilenetmodel_checkpoint.h5) and another for the final trained
model (watermark_mobilenetmodel_final.h5). Keras streamlines the development
of deep learning models by providing a high-level API and abstracting many
complexities associated with neural network implementation.
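A minimal sketch of the training setup described above. The small CNN here stands in for the full MobileNetV2-style architecture, and the data directory path is an assumption; the compile settings (SGD, binary crossentropy, accuracy) and checkpoint filenames follow the description:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import ModelCheckpoint

def build_watermark_model(input_shape=(224, 224, 3)):
    """Tiny CNN standing in for the MobileNetV2-style model,
    compiled with SGD, binary crossentropy, and accuracy."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation='sigmoid'),  # watermark genuine vs counterfeit
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# Save the best weights (by validation loss) during training, as described.
checkpoint = ModelCheckpoint('watermark_mobilenetmodel_checkpoint.h5',
                             monitor='val_loss', save_best_only=True)

# Training would then feed augmented ImageDataGenerator batches:
# datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / 255)
# train_gen = datagen.flow_from_directory('data/watermark',  # path assumed
#                                         target_size=(224, 224), class_mode='binary')
# build_watermark_model().fit(train_gen, epochs=100, callbacks=[checkpoint])
# model.save('watermark_mobilenetmodel_final.h5')
```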
5. Development Frameworks:
Flask
In the provided code, Flask is utilized to create a web application for predicting
whether an uploaded image contains counterfeit money. Flask is a Python web
framework that simplifies the process of building web applications. The application
defines routes, such as "/", "/result", and "/predictt", each associated with specific
functionalities. When a user accesses the specified routes, Flask invokes the
corresponding functions, rendering HTML templates, processing image uploads,
and providing predictions. Flask seamlessly handles HTTP requests and responses,
enabling the integration of the image processing logic, SVM prediction, and web
interface. Additionally, the app.run() method is employed to launch the Flask
application, allowing it to run locally with debugging capabilities. The code
demonstrates the use of Flask's simplicity and flexibility in building a web-based
counterfeit money detection system.
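The route layout described above can be sketched as follows. The route names ("/", "/result", "/predictt") come from the description; in the real system these routes render HTML templates and call the actual SVM pipeline, whereas here plain strings and a stub keep the sketch self-contained:

```python
from flask import Flask, request, redirect, url_for

app = Flask(__name__)

@app.route('/')
def index():
    # Landing page; the real app renders index.html with the upload form.
    return 'DETECTO: upload a currency note image'

@app.route('/predictt', methods=['POST'])
def predictt():
    # Receive the uploaded note image and run the detection pipeline.
    file = request.files.get('image')
    if file is None:
        return redirect(url_for('index'))
    return {'prediction': run_svm(file)}

@app.route('/result')
def result():
    # The real app renders result.html with matching percentages.
    return 'Detailed watermark and security-thread matching percentages'

def run_svm(file):
    """Stub standing in for feature extraction + the custom SVM prediction."""
    return 'genuine'

if __name__ == '__main__':
    app.run(debug=True)  # local run with debugging, as in the project
</n```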
6. Diagram tool
Draw.io plays a pivotal role in our Counterfeit Money Detection System's
development by serving as the go-to tool for Entity-Relationship (ER) diagrams and
Data Flow Diagrams (DFD), offering a comprehensive visualization of data
structures and process flows. Meanwhile, Figma takes the lead in crafting the
system's user interface, ensuring an intuitive and visually appealing design.
7. SQLite
In our project, SQLite is employed as a relational database to manage user-related
data, currency images, and detection results. The user_id table likely contains user
information, while the currency_image table stores images along with relevant
details, linked to users via foreign keys. The detection_result table records outcomes
of currency detection processes, associating results with specific users and currency
images. SQL queries enable seamless interaction with these tables, facilitating tasks
such as data insertion, retrieval, and management within the application. Overall,
SQLite provides a lightweight and efficient solution for organizing and accessing
essential data in the context of user accounts, currency images, and detection results.
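The three tables and their foreign-key links can be sketched as below. The table names match the description, but the column names and sample rows are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # the real app would use a file-based database
cur = conn.cursor()

# Schema sketch; column names are assumptions based on the description above.
cur.executescript("""
CREATE TABLE user_id (
    id INTEGER PRIMARY KEY,
    username TEXT NOT NULL
);
CREATE TABLE currency_image (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES user_id(id),
    path TEXT NOT NULL
);
CREATE TABLE detection_result (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES user_id(id),
    image_id INTEGER REFERENCES currency_image(id),
    verdict TEXT,              -- 'genuine' or 'counterfeit'
    match_percentage REAL      -- CNN watermark/security-thread similarity
);
""")

# Insertion and retrieval with parameterized queries.
cur.execute("INSERT INTO user_id (username) VALUES (?)", ('demo_user',))
cur.execute("INSERT INTO currency_image (user_id, path) VALUES (?, ?)",
            (1, 'uploads/note.jpg'))
cur.execute("INSERT INTO detection_result (user_id, image_id, verdict, match_percentage) "
            "VALUES (?, ?, ?, ?)", (1, 1, 'genuine', 96.0))
conn.commit()

row = cur.execute("SELECT verdict, match_percentage FROM detection_result "
                  "WHERE user_id = 1").fetchone()
```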
5.1.2 Implementation Details of Algorithm:
2. Model Initialization:
Within the CustomSVM class, the weights and bias of the SVM model are
initialized randomly during instantiation. These parameters will be adjusted during
the training process to create an optimal decision boundary.
4. Prediction Function:
The predict method of the CustomSVM class is responsible for making predictions
based on the learned weights and bias. It calculates the dot product of the feature
vector and weights, then applies a threshold to determine the predicted class (1 or
0).
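A minimal sketch of the CustomSVM class consistent with steps 2 and 4 above (random initialization, then a thresholded dot product for prediction); the actual implementation shown in Figure 5.1 may differ in detail:

```python
import numpy as np

class CustomSVM:
    """Minimal linear SVM: randomly initialized weights and bias,
    with prediction by thresholding the decision score."""
    def __init__(self, n_features, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=n_features)  # adjusted during training
        self.bias = rng.normal()

    def decision_function(self, X):
        # Dot product of the feature vectors with the weights, plus bias.
        return X @ self.weights + self.bias

    def predict(self, X):
        # Threshold at zero to assign class 1 (genuine) or 0 (counterfeit).
        return (self.decision_function(X) >= 0).astype(int)
```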
Figure 5. 1 Custom SVM
5. Feature Scaling:
A custom standard scaler (CustomStandardScaler) is defined and applied to scale
the features. The scaler's mean and standard deviation are used to normalize the
data.
Figure 5. 2 CustomStandardScaler
6. Model Saving:
The trained SVM model and the scaler used for feature scaling are saved as joblib
files ('custom_svm_model.joblib' and 'scaler_model.joblib', respectively).
Implementation of CNN algorithm:
2. Watermark Detection:
The watermark detection involves preprocessing both input images and watermark
templates. Template matching is applied, measuring normalized cross-correlation.
If the match surpasses a threshold, a bounding box is drawn on the image, signifying
watermark detection. The script outputs a matching percentage for confidence
assessment.
3. Security Thread Detection:
The detect_silverline function uses template matching (cv2.matchTemplate) to
identify the security thread in the preprocessed images. If a security thread is
detected, a bounding box is drawn around it, and the matching percentage is printed.
Figure 5. 6 Custom CNN model
Figure 5. 7 Training Custom CNN
5.2 Testing
Unit testing is performed to verify each module against its detailed design. Every step
of the project's design and coding has been tested. During testing, we exercise the
module interface to make sure that data flows properly into and out of the program unit.
By inspecting the local data structures, we ensure that temporarily stored data keeps its
integrity during the algorithm's execution. Finally, each error-handling path is tested.
Table 5. 1 Test case For Registration
L_2 | User enters invalid login information | Enter your email: [email protected]; Enter your password: 098765 | Display Error Message "Please enter a correct username and password." | Pass
Table 5. 4 Test Case for Model Accuracy
System testing is typically done to look for faults brought on by unexpected interactions
between subsystems and system components. Once the source code is developed, the
software must be tested to identify and correct any potential defects before it is delivered
to the client.
Figure 5. 8 Fake image
5.3 Result Analysis
The analysis of the counterfeit money detection system's results provides crucial insights
into its performance and effectiveness. By meticulously examining various performance
metrics, we aim to ascertain the system's ability to accurately distinguish between genuine
and counterfeit currency. This analysis serves as a critical step in evaluating the system's
reliability, identifying areas for improvement, and ensuring its robustness in real-world
scenarios.
1. Data Collection
Our counterfeit money detection system benefits from a robust dataset comprising
a total of 472 samples. This dataset was meticulously compiled from the repository
https://amitness.com/ml-datasets/ to ensure comprehensive coverage and relevance
to our detection system's objectives. This dataset encompasses images depicting
authentic banknotes alongside their counterfeit counterparts. Notably, it features a
comprehensive array of Nepalese currency denominations, including 10, 50, 100,
and 1000 rupee notes. Despite the availability of various denominations within the
dataset, we deliberately opted to concentrate solely on the 1000-rupee
denomination. This decision was made to streamline our development efforts,
allowing us to focus exclusively on optimizing the detection system's efficacy in
identifying counterfeit 1000-rupee notes within the Nepalese context. Furthermore,
it's important to note that we captured the images ourselves and incorporated them
into the dataset, ensuring a high degree of relevance and authenticity for our training
data.
2. Epoch Progress: Training Metrics
The training and testing accuracy of an SVM model trained over 100 epochs is
represented in the graph below.
The training and testing loss of an SVM model trained over 100 epochs is
represented in the graph below.
The training and validation accuracy of the CNN model for the watermark, trained
over 100 epochs, is represented in the graph below.
The training and validation loss of the CNN model for the watermark, trained over
100 epochs, is represented in the graph below.
The training and validation accuracy of the CNN model for the security thread,
trained over 100 epochs, is represented in the graph below.
The training and validation loss of the CNN model for the security thread, trained
over 100 epochs, is represented in the graph below.
3. Performance Metrics
Evaluating our counterfeit money detection system involves assessing key
performance metrics. Accuracy measures how often it correctly identifies
counterfeit bills. Precision and recall indicate the system's ability to accurately
classify counterfeit bills and identify genuine ones, respectively. The F1 Score
provides a balanced evaluation of overall performance, while false positive and
false negative rates reveal the system's tendencies for misclassification. By
analyzing these metrics, we can determine the system's reliability in distinguishing
between genuine and counterfeit currency.
Performance metrics were computed separately for two components of our system:
the Support Vector Machine (SVM) for classification and the Convolutional Neural
Network (CNN) for determining the similarity percentage of watermark and
security thread.
The SVM classifier was trained on a dataset containing examples of both genuine
and counterfeit currency. This dataset provided the basis for the SVM to learn how
to distinguish between the two classes using features extracted from the currency
images. We then assessed how well the classifier performs on this training data to
understand its ability to correctly classify genuine and counterfeit currency
instances.
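The metrics listed above can be computed directly with scikit-learn. The labels below are purely illustrative placeholders, not the project's actual predictions:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Illustrative labels only; the real values come from the SVM's test predictions.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = genuine, 0 = counterfeit
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]

accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)  # of notes flagged genuine, how many were
recall = recall_score(y_true, y_pred)        # of genuine notes, how many were found
f1 = f1_score(y_true, y_pred)                # harmonic mean of precision and recall
```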
Figure 5. 17 SVM: Accuracy for Testing Data Table
On the training data, the SVM classifier exhibited perfect classification accuracy,
precision, recall, F1-score, and support for both genuine and counterfeit currency
instances. On the testing data, it achieved an overall accuracy of 94%, again
demonstrating strong performance with high precision, recall, and F1-scores for both
the genuine and counterfeit currency classes.
Figure 5. 19 CNN (Watermark): Accuracy for Validation Data Table
For CNN analysis of watermark similarity, the model achieved high precision, recall, F1-
scores, and accuracy of 96%, indicating its effectiveness in distinguishing between genuine
and counterfeit currency.
Similarly, for CNN analysis of security thread similarity, the model showcased
commendable proficiency with an accuracy of 93%.
4. Confusion Matrix
In a counterfeit money detection system using SVM classification with CNN for
matching percentage evaluation, the confusion matrix provides a concise summary
of the system's performance. It outlines four key scenarios: True Positives (genuine
currency correctly classified), True Negatives (counterfeit currency correctly
classified), False Positives (counterfeit currency classified as genuine), and False
Negatives (genuine currency classified as counterfeit), offering insights into the
accuracy and reliability of the system in distinguishing between genuine and
counterfeit currency. By analyzing these metrics, the system's effectiveness can be
evaluated, guiding further improvements to enhance its performance in identifying
counterfeit currency accurately.
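The four scenarios above can be extracted from scikit-learn's confusion matrix; the labels below are illustrative placeholders, not the project's actual predictions:

```python
from sklearn.metrics import confusion_matrix

# Illustrative labels only: 1 = genuine, 0 = counterfeit.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]

# With labels=[1, 0] the matrix is laid out, for the 'genuine' class, as
# [[TP, FN],
#  [FP, TN]]
cm = confusion_matrix(y_true, y_pred, labels=[1, 0])
tp, fn = cm[0]
fp, tn = cm[1]
```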
The confusion matrix for a CNN analysis of watermark similarity is as follows:
The confusion matrix for a CNN analysis of security thread similarity is as follows:
Chapter 6: Conclusion and Future Recommendations
6.1 Conclusion
The SVM algorithm serves as the initial line of defense, effectively discerning between
authentic and counterfeit currency based on crucial features such as watermark and security
thread present in the uploaded image. Through intuitive user interaction, where a threshold
value can be adjusted to fine-tune the detection process, users gain real-time insights into
the authenticity of the currency, enabling informed decision-making. The accuracy of
testing data using SVM stands at 94%.
Moreover, the incorporation of CNN enhances the system's functionality by providing users
with detailed insights into the matching percentage of watermark and security thread of the
uploaded image. This algorithm offers more comprehensive details about the uploaded
image. The validation data accuracy for detecting the watermark is 91%, while for detecting
the security thread, validation data accuracy is 93%.
Technology is advancing at a rapid pace. In this system, we were able to detect the
watermark and the security thread. So far we have examined the note image as a whole;
in the future, we will try to cover all of the currency's security features by using a sound
fundamental structure and providing sufficient training data. When a picture from
outside the training set is loaded, the system does not provide 100 percent accuracy;
optimizing the system can address this problem.
References
[4] A. Zarin and J. Uddin, "A hybrid fake bank note detection model using OCR, face
recognition and Hough features," in Cybersecurity and Cyberforensics (CCC), 2019.
[7] S. Gothe, K. Naik and V. Joshi, "Fake currency detection using image processing
and machine learning," 2018.
[8] V. L. Nadh and S. Prasad, "Support vector machine in the anticipation of currency
markets," Int. J. Eng. Technol., vol. 7(2), pp. 66–68, 2018.
[9] V. V and L. M, "Real time fake currency note detection using deep learning," Int. J.
Eng. Adv. Technol. (IJEAT), vol. 9(1S5), pp. 2249-8958, 2019.
[10] A. T, B. G, W. P and C. P, "Fake currency detection using image processing,"
IOP Conf. Ser. Mater. Sci. Eng., p. 263, 2017.
[11] T. Agasti, "Fake currency detection using image processing," IOP Conference
Series: Materials Science and Engineering, p. 243, 2017.
Appendix
Figure 7. 2 Login Page
Figure 7. 4 Image with 0.7 threshold
Figure 7. 6 Image with 0.9 Threshold