Currency Detection For Blind People

1.1 ABSTRACT

In this paper we introduce a mobile system for currency recognition that recognizes Indian currency notes in different views and scales. We developed a dataset of Indian currency images on the Android platform and then applied an automatic mobile recognition system, running on a smartphone, to this dataset using the scale-invariant feature transform (SIFT). Colour provides significant information in the object description and matching tasks; many objects cannot be classified correctly without their colour features. One of the most important problems faced by visually impaired people is currency identification, especially of banknotes. In this system we introduce a simple currency recognition system applied to Indian banknotes.

1.2 OBJECTIVES

1. Develop a Mobile Currency Recognition System:


To create a system capable of recognizing Indian currency notes of various
denominations through a mobile application, catering specifically to the needs
of visually impaired individuals.

2. Create a Specialized Dataset for Indian Currency:


To develop and utilize a comprehensive dataset of Indian currency notes that includes images captured in different views and scales, facilitating the training of a machine learning model for accurate currency recognition.

3. Implement Image Processing Techniques:


To employ advanced image processing techniques, such as the Scale-Invariant Feature Transform (SIFT), for the extraction of relevant features from currency note images, enhancing the system's recognition capabilities.

4. Utilize Color Information in Currency Recognition:


To leverage color as a significant feature in the object description and matching
tasks, acknowledging that many objects, including currency notes, cannot be
accurately classified without considering their color features.

5. Assist Visually Impaired Individuals in Currency Identification:


To address the challenge faced by visually impaired people in identifying
currency notes, by providing an automated solution that enables them to
determine the denomination of Indian banknotes through an accessible mobile application.

6. Incorporate Real-time Processing and Audio Output:


To ensure the system processes currency note images in real-time and delivers
immediate audio feedback to the user, stating the identified denomination,
thereby enhancing the user experience and providing instant assistance.

7. Deploy a Camera-based Recognition System:


To focus on a camera-based approach that allows users to capture images
of currency notes using their smartphones, making the system more practical
and user-friendly for visually impaired individuals who may find it challenging
to use scanner-based systems.

8. Achieve High Accuracy and Usability:


To design the system with the goal of achieving high accuracy in currency
recognition under various conditions, such as different lighting situations, and
ensuring the application is easy to use for the target audience.

9. To provide a user-friendly interface accessible by all users.

10. To assist visually impaired individuals in identifying currency notes.

11. To ensure high accuracy in currency detection using machine learning.

These objectives outline the project's aim to develop a technologically
advanced, user-friendly, and accessible mobile application for the
visually impaired, facilitating their independence and confidence in financial
transactions by enabling them to identify currency notes accurately.

1.3 INTRODUCTION

The "Detection of Currency Notes for Blind People" project aims to develop
a system that assists visually impaired individuals in identifying different
currency notes. By utilizing image processing and machine learning
techniques, the system provides an audible output to inform users about the
denomination of a currency note. The ability to identify currency without human input is valuable for a number of applications, and probably the most important one is assisting visually impaired people. About 165 persons per lakh were visually disabled; among them, 82 percent were blind and 18 percent had low vision. The recent development of mobile platforms makes the idea of currency recognition with a smartphone an appealing one. We present an app in which the currency is recognized and the result is delivered through audio output. One of the main problems faced by people with visual impairment is the inability to identify paper currencies, because the paper texture and size of different denominations are very similar. Hence, the role of this system is to provide a solution to this problem so that blind people can feel safe and confident in financial transactions.
There are two types of systems in the currency recognition research field: scanner-based and camera-based.

Scanner-based systems are expected to scan the whole note; such systems are suitable for equipment such as currency counters. Camera-based systems, by contrast, capture the currency with a camera, which may capture only part of the note. Most related works in the literature deal with the scanner-based type [2-5]. For use by visually impaired people, the system should allow users to capture any part of the currency with their mobile phone, identify it, and announce the currency value. In this paper, camera-based Indian currency recognition is performed using very simple image processing tools, which keeps the processing time very short with acceptable accuracy. The present system can handle currency that is only partially captured and varying lighting conditions. Identification of various denominations of currency is not an easy task for visually impaired people. Although Indian notes carry special symbols embossed on different denominations, the task is still tedious for blind people. The lack of identification devices motivated the need for a handheld device to distinguish between different denominations.

In this project, features of the input image are compared with all the reference images of the currency. If the difference is less than a threshold, the numeric part of the note is extracted and compared; if it matches, the corresponding currency denomination is recognized. Indian currency denominations such as 10, 20, 50, 100, 500 and 2000 are recognized. The real-time paper currency identification and audio output system is developed using open-source tools.
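As a rough illustration of this matching step, the following minimal sketch, assuming OpenCV with SIFT support and a hypothetical references/ folder containing one image per denomination, compares the captured note against each reference and accepts the best match only if it exceeds a threshold (the threshold value is illustrative, not taken from the original system):

import cv2, glob, os

MATCH_THRESHOLD = 30  # assumed minimum number of good matches; tune on the dataset

def good_matches(test_img, ref_img, ratio=0.75):
    # Count SIFT matches that pass Lowe's ratio test.
    sift = cv2.SIFT_create()
    _, des_test = sift.detectAndCompute(test_img, None)
    _, des_ref = sift.detectAndCompute(ref_img, None)
    if des_test is None or des_ref is None:
        return 0
    pairs = cv2.BFMatcher().knnMatch(des_test, des_ref, k=2)
    return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def recognize(test_path, ref_dir="references"):
    # Return the denomination (taken from the reference file name) or None.
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
    best_label, best_score = None, 0
    for ref_path in glob.glob(os.path.join(ref_dir, "*.jpg")):
        ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
        score = good_matches(test, ref)
        if score > best_score:
            best_label, best_score = os.path.splitext(os.path.basename(ref_path))[0], score
    return best_label if best_score >= MATCH_THRESHOLD else None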

The system is divided into two parts.

 The first part is to identify the currency denomination through image processing.

 The second part is the oral output that notifies the visually impaired person of the denomination of the note that he/she is currently holding.

Features:

• Image Capture: The system uses a camera or mobile device to capture images of currency notes.
• Image Processing: Employing image processing algorithms, the system extracts relevant features from the currency note images.
• Machine Learning Model: A trained machine learning model recognizes and classifies the currency notes based on the extracted features.
• Text-to-Speech Integration: The system converts the recognized denomination into speech, allowing the user to hear the identified currency note.
• Real-time Processing: The system provides quick and real-time feedback, enhancing user experience.

1.4 LITERATURE REVIEW

REFERENCE:

• Smarti Kotwal, "Image processing based heuristic analysis for enhanced currency recognition", International Journal of Advancements in Technology 2.1 (2011), pp. 82–89.
• Vishnu R and Bini Omman, "Principal Features for Indian Currency Recognition", 2014 Annual IEEE India Conference (INDICON).
• Binod Prasad Yadav, "Indian Currency Recognition and Verification System Using Image Processing", International Journal of Advanced Research in Computer Science and Software Engineering 4.12 (2014).
• Khadijatul Kubra, Baswaraj Gadgay and Veeresh Pujari, "Smart Recognition System for Visually Impaired People", IJRASET Journal, 2017.
• K Shilpa Reddy, S.K Mounika, K Pooja and N Sahana, "Text to Speech for the Visually Impaired", IRJCS Journal, 2017.
• Snehal Saraf, Vrushali Sindhikar, Ankita Sonawane and Shamali Thakare, "Currency Recognition System for Visually Impaired", IJARIIE Journal, 2017.
• A report on "Disabled Persons" based on data collected in a state survey.
• P. Viola and M. J. Jones, "Robust real-time face detection", IJCV 57(2), pp. 137–154, 2004.
• S. Singh, S. Choudhury, K. Vishal and C. V. Jawahar, "Currency Recognition on Mobile Phones", 22nd International Conference on Pattern Recognition (ICPR), Sweden, 24 August 2014, pp. 2661–2666, IEEE.

REFERENCE WEBSITE:

• www.teacheablemachine.com
• www.jetir.org
• http://ijariie.com

1.5 ANALYSIS AND REQUIREMENTS

ANALYSIS:

1. User Needs:

 Visually impaired individuals need a reliable and efficient method to distinguish between different currency notes.
 The solution should be accessible and easy to use independently.

2. Accuracy:

 The system must accurately identify the denomination of each currency
note to prevent errors in transactions.
 False positives and false negatives should be minimized to avoid
confusion.

3. Speed:

 Quick detection is essential for seamless transactions.


 The system should provide results in real-time or near-real-time to avoid
delays.

4. Adaptability:

 The solution should be adaptable to different currencies used globally to cater to a diverse user base.
 It should also accommodate new currency designs or variations.

5. Portability:

 The system should be portable and easily integrated into existing assistive
devices or smartphone applications.

REQUIREMENTS:

1. MACHINE LEARNING MODULE:

 Implement machine learning models trained on currency images to accurately recognize different denominations.
 Models should be capable of handling variations in currency designs, lighting conditions, and orientations.
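A minimal sketch of such variation handling, assuming a recent TensorFlow/Keras installation and a hypothetical currency_dataset/train folder with one subfolder per denomination, applies random rotation, zoom, brightness and contrast during training:

import tensorflow as tf

# Augmentation layers that expose the model to orientation, scale and lighting variations.
augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomBrightness(0.2),
    tf.keras.layers.RandomContrast(0.2),
])

# Hypothetical dataset layout: currency_dataset/train/<denomination>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "currency_dataset/train", image_size=(224, 224), batch_size=32)
train_ds = train_ds.map(lambda x, y: (augmentation(x, training=True), y))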

2. ACCESSIBLE INTERFACE:

 Develop an intuitive user interface that can be navigated using touch or voice commands.
 Provide audio feedback to assist users in understanding the detected currency.

3. REAL-TIME PROCESSING:

 Ensure the system can process currency detection quickly to provide timely feedback to users.

4. CROSS-PLATFORM COMPATIBILITY:

 Build the solution to work across various platforms, including smartphones, tablets, and specialized assistive devices.

5. PRIVACY AND SECURITY:

 Implement measures to protect users' privacy and prevent misuse of personal information.
 Ensure secure data transmission and storage.

1.6 SYSTEM ARCHITECTURE

1. Image Capture Module:


Utilizes a camera or mobile device to capture images of currency notes.

2. Image Processing Module:

Extracts features such as color, size, and patterns from the currency
note images.

3. Machine Learning Model:


Trained using a dataset of currency note images to recognize and
classify denominations.

4. Text-to-Speech Integration:
Converts the recognized denomination into spoken words for the user.

5. User Interface:
Provides a simple and accessible interface for users to interact with the system.

Workflow:

Image Capture: The user captures an image of a currency note using the designated device.
Image Processing: The system processes the captured image to extract relevant features.
Machine Learning Classification: The features are input into the machine learning model, which predicts the denomination of the currency note.
Text-to-Speech Output: The recognized denomination is converted into speech and communicated to the user.
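The sketch below ties this workflow together in a single function. It is only an assumption-laden outline (OpenCV for reading the image, pyttsx3 for speech, and a placeholder classify callable standing in for whichever recognizer is used), not the exact mobile implementation described later:

import cv2
import pyttsx3

def announce_denomination(image_path, classify):
    # Image capture: the camera app has already saved the photo to image_path.
    image = cv2.imread(image_path)
    # Image processing: convert to grayscale before feature extraction.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Machine learning classification: classify() returns a denomination label or None.
    label = classify(gray)
    # Text-to-speech output: announce the result to the user.
    engine = pyttsx3.init()
    engine.say(f"This is a {label} rupee note" if label else "Note not recognized")
    engine.runAndWait()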

PROPOSED SYSTEM:

Fig. 1. Architecture

Image retrieval:

The first stage of any vision system is the image acquisition stage. After the image has been obtained, various methods of processing can be applied to it to perform many different tasks. Image acquisition is always the first step in the workflow sequence because, without an image, no processing is possible. There are various ways to obtain an image, such as with the help of a camera or a scanner. The acquired image should retain all the relevant features.

Pre-processing:

The main goal of pre-processing is to improve the visual appearance of the images and the quality of the dataset. Pre-processing operations are those that are normally required before the main data analysis and extraction of information. Image pre-processing, also called image restoration, involves the correction of distortion, degradation, and noise introduced during the imaging process, and it can notably increase the accuracy of an optical inspection. Image adjustment is done with the help of interpolation, a technique mostly used for tasks such as zooming, rotating, and shrinking. Removing noise is an important step, because noise can affect segmentation and pattern matching. When a smoothing operation is performed on a pixel, its neighbourhood is used in the computation and a new value for the pixel is produced.
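A minimal pre-processing sketch along these lines, assuming OpenCV (the target width and kernel size are illustrative choices, not values from the original system):

import cv2

def preprocess(image, width=600):
    # Resize by interpolation so all notes reach a common working width.
    scale = width / image.shape[1]
    resized = cv2.resize(image, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    # Convert to grayscale and smooth with a small Gaussian kernel to suppress noise.
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (5, 5), 0)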

Match input image with datasets:

In order to confirm image similarity, we check whether the key points in the test image are spatially consistent with those of the retrieved images. We use the popular method of geometric verification (GV), fitting a fundamental matrix (adopted from [16]) to find the number of key points of the test image that are spatially consistent with those of the retrieved images.
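A hedged sketch of this verification step, assuming OpenCV's SIFT and RANSAC-based fundamental matrix estimation, counts the keypoint matches that remain consistent with the epipolar geometry:

import cv2
import numpy as np

def geometric_verification(img_test, img_ref, ratio=0.75):
    # Detect SIFT keypoints and descriptors in both images.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_test, None)
    kp2, des2 = sift.detectAndCompute(img_ref, None)
    if des1 is None or des2 is None:
        return 0
    # Keep tentative matches that pass Lowe's ratio test.
    pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 8:  # fitting a fundamental matrix needs at least 8 correspondences
        return 0
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC keeps only the matches consistent with the epipolar constraint.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    return int(mask.sum()) if mask is not None else 0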

Audio output generation:

The recognized text codes are recorded in script files. Then we employ the text-to-speech converter to load these files and play the audio output of the text information. Blind users can adjust the speech rate, volume and language according to their preferences.
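For illustration, a minimal text-to-speech sketch assuming the pyttsx3 library (the deployed app may instead use the platform's native TTS engine), showing how rate, volume and voice can be adjusted:

import pyttsx3

def speak(text, rate=150, volume=1.0, voice_id=None):
    # Announce the recognized denomination with user-adjustable settings.
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)           # speaking rate in words per minute
    engine.setProperty("volume", volume)       # 0.0 (mute) to 1.0 (full)
    if voice_id is not None:
        engine.setProperty("voice", voice_id)  # e.g. a regional-language voice, if installed
    engine.say(text)
    engine.runAndWait()

speak("This is a 100 rupee note")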

1.7 IMPLEMENTATION

1. HARDWARE:

 Use a camera-equipped device, such as a smartphone or specialized
device, to capture images of banknotes.
 Alternatively, dedicated handheld devices with built-in cameras and
currency detection capabilities can be used.

2. SOFTWARE:

 Develop or utilize existing image processing algorithms to analyze the images captured by the camera.
 Implement algorithms to detect and recognize features unique to each
currency, such as color, size, patterns, and specific markings (e.g.,
serial numbers, denominations).
 Utilize machine learning models, such as convolutional neural
networks (CNNs), to train the system to recognize different currencies
accurately.
 Incorporate text-to-speech technology to provide auditory feedback to
the user about the detected currency denomination.
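As a hedged illustration of the CNN approach mentioned above, the following Keras sketch defines a small classifier; the layer sizes and 224x224 input are assumptions rather than the exact production model, and the six classes follow the denominations listed earlier:

import tensorflow as tf

NUM_CLASSES = 6  # 10, 20, 50, 100, 500 and 2000 rupee notes

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

After training, such a model could be converted to TensorFlow Lite for on-device inference, which matches the mobile deployment described in the later sections.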

3. USER INTERFACE:

 Design a user-friendly interface that allows visually impaired users to easily capture images of banknotes and receive feedback about the detected currency.
 Include tactile feedback or voice commands to assist users in
positioning the banknote correctly for scanning.

4. ACCESSIBILITY FEATURES:

 Ensure that the application or device complies with accessibility
standards, such as providing high contrast options, compatibility with
screen readers, and support for alternative input methods.

5. TESTING AND CALIBRATION:

 Test the system with a diverse range of banknotes to ensure accuracy and reliability across different currencies and denominations.
 Implement calibration procedures to account for variations in lighting
conditions, angles, and image quality.

6. UPDATES AND MAINTENANCE:

 Regularly update the software to improve accuracy, add support for new currencies, and address any issues or bugs that arise.
 Provide ongoing technical support and assistance to users to ensure the
smooth functioning of the currency detection system.

Overall, currency detection for visually impaired individuals requires a combination of advanced image processing techniques, machine learning algorithms, and accessible user interfaces to provide accurate and reliable assistance in identifying banknotes.

1.8 EXPERIMENTAL RESULT:

Here, we capture the image through an Android mobile phone, which is given as the input image. After the matching process, the audio output is generated.

Implementation Details:

The front end is built with the Flutter framework and the back end uses TensorFlow, developed in VS Code.

Libraries/Frameworks:
OpenCV for image processing, scikit-learn for machine learning, and a text-to-speech library for audio output.

Deployment:

The system can be deployed on mobile devices with a camera, making it portable for the convenience of visually impaired users.

1.9 CONCLUSIONS:

In this project, to deal with the common aiming problem for blind users, we have proposed a mobile application for currency recognition that recognizes Indian currency and helps blind persons in their daily lives. The output is given in the form of regional-language audio. The "Detection of Currency Notes for Blind People" project aims to empower visually impaired individuals by providing a reliable and efficient tool for currency identification. The integration of image processing, machine learning, and text-to-speech technologies creates a user-friendly and accessible solution for daily use. This work can be extended to classify whether a note is genuine or counterfeit, to add foreign languages so that the system can be used worldwide, and to develop recognition of currency notes on low-end mobile phones for visually impaired persons, notifying the user with a voice note in a regional language. In the future, it can also be extended to recognize foreign currencies.
