
www.ijcrt.org © 2020 IJCRT | Volume 8, Issue 3 March 2020 | ISSN: 2320-2882

A SURVEY ON MOUSE CONTROL AND CHARACTER RECOGNITION USING HAND GESTURES

1 Delip K, 2 Chandrasekar M, 3 Hameed Badhusha Irfan F, 4 Kalaiarasi P

1,2,3 Student, 4 Assistant Professor
1,2,3,4 Computer Science and Engineering, Agni College of Technology

ABSTRACT:
As the growth of computer technology is immense, this paper proposes a mouse control system that uses hand gestures captured from a webcam through a color detection technique. The system allows the user to perform operations such as select, click, scroll, and drag using different hand gestures. The proposed system uses a low-resolution webcam that acts as a sensor and is able to track the user's hand. The system is implemented using Python and OpenCV. Hand gesture is one of the most effortless and natural ways of communication.

INTRODUCTION:

One of the most efficient ways of human communication is through hand gestures, which are universally accepted and easily understood. The experimental setup uses a low-cost web camera with a high-definition recording feature, mounted on top of the computer monitor or using the fixed camera of a laptop, which captures snapshots in the Red, Green, Blue (RGB) color space from a fixed distance. The processing can be divided into four stages: image preprocessing, region extraction, noise reduction, and edge detection.

In this project an effective hand gesture technique has been proposed based on preprocessing, background subtraction, and edge detection. The main objective of preprocessing is to transform the data so that it can be processed more effectively and effortlessly. In the proposed work, preprocessing is performed by a combination of capturing the image, removing noise, background subtraction, and edge detection; after these stages the remaining operations are performed. Initially, the hand gesture images are captured from the vision-based camera, without wearable trackers or gloves. With this project we aim to provide cost-free hand recognition software for laptops and PCs with webcam support. The project covers a hand recognition tool which can be used to move the mouse pointer and perform operations such as click, select, scroll, and character recognition events.

IJCRT2003114 International Journal of Creative Research Thoughts (IJCRT) www.ijcrt.org 841


PROPOSED SYSTEM:

Even though a number of quick-access methods are available for hand and mouse gestures, with our project we can make use of a laptop or webcam and, by recognizing the hand gesture, control the mouse and perform basic operations such as mouse pointer control, select, deselect, left click, right click, and character recognition. It uses a simple algorithm (the convex hull algorithm) to determine the hand and hand movements, and assigns an action to each movement. The system is written in Python, which is responsive and easily implemented, since Python is a simple, platform-independent, portable language with great flexibility; the system is focused on creating a virtual mouse and hand recognition tool. The scope is restricted only by your imagination.

USE OF PROPOSED WORK:

This virtual mouse hand recognition application uses a finger without additional hardware requirements. This is done by using vision-based hand gesture recognition with input from a webcam.

METHODS:

In this section each component of the system is explained separately, in the following subsections.

CAMERA SETTING:

The runtime operations are managed by the webcam of the connected laptop or desktop. To capture a video, we need to create a video capture object. Images can then be captured frame by frame. At the end, do not forget to release the capture. We can also apply the color detection technique to any image by making simple modifications to the code.

CAPTURING FRAME:

An infinite loop is used so that the web camera captures a frame in every iteration and stays open during the entire course of the program. We capture the live feed stream frame by frame, and then convert each captured frame from the RGB (default) color space to the HSV color space. There are more than 150 color-space conversion methods available in OpenCV, but we will look into only the two most widely used ones: BGR to Gray and BGR to HSV.

MASKING TECHNIQUES:

During masking, a bitwise AND operation on the input image and the threshold image is performed, with the result that only the red-colored objects are highlighted. The result of the AND operation is stored in res. The frame, res, and mask are then displayed in three separate windows using the imshow() function.

DISPLAY THE FRAME:

The imshow() function requires waitKey() to be called regularly; the display loop is driven by these waitKey() calls. So we call the waitKey() function with a 1 ms delay.

MOUSE MOVEMENT:

We first calculate the center of each of the two detected red objects, which we can easily do by taking the average of the bounding box's maximum and minimum points. Now that we have two coordinates, the centers of the two objects, we take their average to get the red point shown in the image. We then convert the detected coordinate from the camera resolution to the actual screen resolution and set that location as the mouse position. Moving the mouse pointer takes time, so we must wait until the pointer reaches that point: we start a loop that does nothing but wait until the current mouse location equals the assigned mouse location. That is the open gesture.

CLICKING:

The next step is to implement the close gesture, with which the operation of clicking an object and dragging it is performed. It is similar to the open gesture, but the difference is that we only have one object here, so we only need to calculate its center. That center is where we position our mouse pointer, and instead of a mouse release operation we perform a mouse press operation.

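The mouse movement and clicking logic can be sketched as follows. The coordinate math is pure; the actual mouse calls assume the pyautogui package, which the paper does not name (it names only Python and OpenCV), so both that dependency and the helper names are illustrative.

```python
# Sketch of the open/close gesture handling. A box is
# (x_min, y_min, x_max, y_max) of a detected red object.

def box_center(box):
    """Center of a bounding box: average of its min and max points."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def cam_to_screen(point, cam_size, screen_size):
    """Scale a camera-space point to the actual screen resolution."""
    (x, y), (cw, ch), (sw, sh) = point, cam_size, screen_size
    return (x * sw / cw, y * sh / ch)

def open_gesture_target(box_a, box_b, cam_size, screen_size):
    """Open gesture: the pointer target is the midpoint of the two
    detected objects' centers, mapped to screen coordinates."""
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    midpoint = ((ax + bx) / 2.0, (ay + by) / 2.0)
    return cam_to_screen(midpoint, cam_size, screen_size)

def move_pointer(box_a, box_b, cam_size=(640, 480)):
    # Imported here so the geometry above stays testable headless;
    # pyautogui is an assumed dependency.
    import pyautogui
    x, y = open_gesture_target(box_a, box_b, cam_size, pyautogui.size())
    pyautogui.moveTo(x, y)     # open gesture: move only

def press_pointer(box, cam_size=(640, 480)):
    import pyautogui
    x, y = cam_to_screen(box_center(box), cam_size, pyautogui.size())
    pyautogui.mouseDown(x, y)  # close gesture: press, not release
```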
DRAG:

In order to implement dragging we introduce a variable, 'pinch flag', which is set to 1 if a click happened earlier. After clicking, whenever we find the open gesture we check whether the pinch flag is set to 1. If it is, the drag operation is performed; otherwise the mouse move operation is performed.

CHARACTER RECOGNITION:

Step 1:

LOAD_DATA: We use Python's mnist library to load the data. Let's now get the data ready to be fed to the model: splitting the data into train and test sets, standardizing the images, and other preliminary steps.

DEFINE MODEL: In Keras, models are defined as a sequence of layers. We first initialize a 'Sequential' model and then add the layers, with the respective neurons in them.

COMPILE MODEL: Now that the model is defined, we can compile it. Compiling the model uses the efficient numerical libraries under the covers (the so-called backend), such as Theano or TensorFlow. Here, we specify some properties needed to train the network.

FIT_MODEL: Here, we train the model using a model checkpointer, which helps us save the best model.

EVALUATE_MODEL: Test accuracy of the model on the EMNIST dataset was 91.1%.

Step 2: Train A Convolutional Neural Network Model

Step 3: Initializing Stuff:

First, we load the models built in the previous steps. We then create a letters dictionary; blueLower and blueUpper boundaries to detect the blue bottle cap; a kernel to smooth things along the way; an empty blackboard to store the writings in white (just like the alphabet in the EMNIST dataset); a deque to store all the points generated by the pen (blue bottle cap); and a couple of default-value variables.

Step 4: Capturing The Writings:

Once we start reading the input video frame by frame, we try to find the blue bottle cap and use it as a pen. We use OpenCV's cv2.VideoCapture() method to read the video, frame by frame.

Step 5: Scraping The Writing And Passing It To The Model

FLOWCHART: (figure not reproduced)
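Steps 1 and 2 can be sketched with Keras as follows. The layer sizes and hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes=26):
    """Define and compile a letter classifier. Layer sizes are
    illustrative, not the paper's exact network."""
    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),   # EMNIST-sized input
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dense(num_classes, activation='softmax'),
    ])
    # Compiling hands the graph to the numerical backend.
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# FIT_MODEL with a checkpointer that keeps the best weights:
# checkpoint = keras.callbacks.ModelCheckpoint('best_model.keras',
#                                              save_best_only=True)
# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=10, callbacks=[checkpoint])
```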

RESULT AND EVALUATION:
In this paper we focused on improving the interaction between humans and machines. Our motive was to create this technology in the cheapest possible way, and to create it under a standardized operating system. The system is mainly aimed at reducing the use of hardware components attached to the computer. The application can run on an ordinary computer having at least a 2 MP front camera, at least a Pentium processor, and at least 256 MB of RAM.

CONCLUSION:
Hand gesture recognition provides one of the most natural forms of interaction between human and machine, and is important for developing alternative human-computer interaction. It enables humans to interface with machines in a more natural way. This technology has wide applications in the fields of augmented reality, computer graphics, computer gaming, prosthetics, and biomedical instrumentation.

REFERENCES:
[1] Abhik Banerjee, Abhirup Ghosh, Koustuvmoni Bharadwaj, "Mouse Control using a Web Camera based on Color Detection", IJCTT, vol. 9, Mar 2014.

[2] Angel, Neethu P. S., "Real Time Static & Dynamic Hand Gesture Recognition", International Journal of Scientific & Engineering Research, Volume 4, Issue 3, March 2013.

[3] Q. Y. Zhang, F. Chen and X. W. Liu, "Hand Gesture Detection and Segmentation Based on Difference Background Image with Complex Background", Proceedings of the 2008 International Conference on Embedded Software and Systems, Sichuan, 29-31 July 2008, pp. 338-343.
