EYEBALL CURSOR MOVEMENT USING OPENCV
Project Report
DR. M. MANIKANDAN
(Assistant Professor, Department of Computational Intelligence)
BACHELOR OF TECHNOLOGY
in
Computer Science and Engineering with specialization in Artificial Intelligence and Machine Learning
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
BONAFIDE CERTIFICATE
Certified that this 18CSP109L Major Project report titled “EYEBALL CURSOR
MOVEMENT USING OPENCV” is the bonafide work of the candidates who carried out the project
work under my supervision. To the best of my knowledge, the work reported herein does not
form part of any other thesis or dissertation on the basis of which a degree or award was conferred on
an earlier occasion for this or any other candidate.
SIGNATURE SIGNATURE
Department of Computational Intelligence
SRM Institute of Science & Technology
Own Work Declaration Form
Degree / Course: B.Tech in Computer Science and Engineering with specialization in Artificial
Intelligence and Machine Learning
• Referenced and put in inverted commas all quoted text (from books, web, etc)
• Given the sources of all pictures, data, etc. that are not my own
• Not made any use of the report(s) or essay(s) of any other student(s) either past or present
• Acknowledged in appropriate places any help that I have received from others (e.g. fellow students,
technicians, statisticians, external sources)
• Complied with any other plagiarism criteria specified in the Course handbook / University website
I understand that any false claim in respect of this work will be penalized in accordance with the University policies and regulations
DECLARATION:
I am aware of and understand the University’s policy on Academic misconduct and plagiarism and
I certify that this assessment is my / our own work, except where indicated by referencing, and that I
have followed the good academic practices noted above.
STUDENT 1 SIGNATURE:
STUDENT 2 SIGNATURE:
DATE:
ACKNOWLEDGEMENT
We express our humble gratitude to Dr. C. Muthamizhchelvan, Vice-Chancellor, SRM Institute of
Science and Technology, for the facilities extended for the project work and his continued support.
We extend our sincere thanks to the Dean-CET, SRM Institute of Science and Technology, Dr.
T. V. Gopal, for his invaluable support.
We wish to thank Dr Revathi Venkataraman, Professor & Chairperson, School of Computing, SRM
Institute of Science and Technology, for her support throughout the project work.
We are incredibly grateful to our Head of the Department, Dr. R. Annie Uthra, Professor, Department
of Computational Intelligence, SRM Institute of Science and Technology, for her suggestions and
encouragement at all stages of the project work.
We want to convey our thanks to our programme coordinator and panel head, Dr. D. Anitha, Assistant
Professor, Department of Computational Intelligence, SRM Institute of Science and Technology, for
her suggestions and encouragement at all stages of the project work.
We register our immeasurable thanks to our Faculty Advisor, Dr. Kaavya Kanagaraj, Assistant
Professor, Department of Computational Intelligence, SRM Institute of Science and Technology, for her
suggestions and encouragement at all stages of the project work.
Our inexpressible respect and thanks go to our guide, Dr. M. Manikandan, Assistant Professor,
Department of Computational Intelligence, SRM Institute of Science and Technology, for his
suggestions and encouragement at all stages of the project work and for providing us with the
opportunity to pursue our project under his mentorship. He gave us the freedom and support to explore
the research topics of our interest, and his passion for solving problems and making a difference in
the world has always been inspiring.
We sincerely thank the staff and students of Data Science and Business Systems at SRM Institute of
Science and Technology for their help during our project. Finally, we would like to thank our parents,
family members, and friends for their unconditional love, constant support, and encouragement.
ABSTRACT
TABLE OF CONTENTS
1 INTRODUCTION
2 LITERATURE SURVEY
3.2 MODULES
4 METHODOLOGY
5 RESULTS AND DISCUSSION
REFERENCES
APPENDIX A
APPENDIX B
APPENDIX C
LIST OF FIGURES
3.1 Architecture Diagram
4.1.2 Sequence Diagram
4.1.3 Activity Diagram
A.2 Detecting Left Side Eye
ABBREVIATIONS
AI Artificial Intelligence
ML Machine Learning
UX User Experience
CHAPTER 1
INTRODUCTION
As computer technologies grow rapidly, the importance of human-computer interaction becomes
highly notable. Some persons with disabilities are not able to use computers, and eyeball movement
control is mainly intended for them. Incorporating this eye-controlling system into computers
allows them to work without the help of other individuals. The human-computer interface (HCI) is focused
on the use of computer technology to provide an interface between the computer and the human. There is
a need to find a suitable technology that enables effective communication between humans and computers.
Human-computer interaction therefore plays an important role: there is a need for a method that offers
an alternative way of communicating with the computer to individuals who have impairments, giving
them an equivalent space to be part of the Information Society [1-5].
In recent years, human-computer interfaces have attracted the attention of various researchers across the
globe. The human-computer interface presented here is an implementation of a vision-based system for eye
movement detection for disabled people.
Today, most users interact with computers through traditional input devices such as a mouse and
keyboard. In the age of the Internet, where most interactions take place through computer
screens, there is a need for efficient and user-friendly interfaces. Eyeball-based Cursor
Movement Control (ECMC) aims to bridge this gap by allowing users to control cursor
movement on a computer screen using their eyeballs alone. This innovative approach could
revolutionize the way we interact with computer screens and pave the way for a more
immersive and intuitive digital experience. To accomplish this, we can employ machine
learning algorithms that can accurately track and predict eye movements. By training these
algorithms on large datasets of eye movement data, we can achieve high levels of accuracy
and precision in controlling cursor movement.
1.2 OBJECTIVE:
For example, a popular approach to implementing ECMC is the use of the Tobii Pro X2-30
eye tracker, which provides accurate and real-time eye-tracking data. This data can then be
processed by a machine learning algorithm that learns to map eye movements onto the
corresponding cursor movements on the screen. Furthermore, the use of facial recognition
and computer vision techniques could also contribute to the development of ECMC. By
analyzing the user’s facial expressions and detecting changes in facial muscle activity, the
system can accurately determine the user’s intended cursor movement. To ensure a smooth
and seamless user experience, we will need to consider factors such as eye fatigue, eyestrain,
and user comfort. To address these issues, we can incorporate features like automatic pauses,
adjustable tracking speeds, and personalized calibration processes to ensure optimal user
comfort and satisfaction. In conclusion, the development of Eyeball-based Cursor Movement
Control presents a significant opportunity to revolutionize the way we interact with computer
screens. By leveraging the power of machine learning algorithms, facial recognition, and
computer vision techniques, we can create an intuitive and efficient system that enhances accessibility
and the overall user experience.
1.3 SCOPE AND APPLICATION OF PROJECT:
The goal is to develop software that can track the user's eye movements through a camera feed,
detect the direction in which the eyes are looking, and translate those movements into real-time
cursor movements on a computer screen. This technology could be beneficial for individuals with
physical disabilities or limited mobility, providing them with an alternative and accessible way to
interact with a computer interface. The key challenges involved in this project include accurately
detecting and tracking the eyes in a video stream, interpreting the direction of the gaze, and
mapping these movements to control the cursor smoothly and precisely on the screen. The system
should also be responsive, reliable, and user-friendly to provide an efficient and comfortable user
experience.
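As an illustration of this scope, a minimal sketch of the intended pipeline is given below. It assumes the
open-source gaze-tracking code listed in Appendix A and the pyautogui library, and it is only an outline,
not the project's final implementation.

# Minimal sketch: webcam frame -> gaze direction -> small cursor step (illustrative only).
import cv2
import pyautogui
from gaze_tracking import GazeTracking  # gaze-tracking code as listed in Appendix A

gaze = GazeTracking()
webcam = cv2.VideoCapture(0)   # default webcam as the video source
STEP = 40                      # pixels to move per detected gaze direction (tunable)

while True:
    _, frame = webcam.read()
    gaze.refresh(frame)                  # detect the face, eyes and pupils in this frame
    if gaze.is_left():
        pyautogui.moveRel(-STEP, 0)      # nudge the cursor to the left
    elif gaze.is_right():
        pyautogui.moveRel(STEP, 0)       # nudge the cursor to the right
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) == 27:             # Esc key exits
        break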
1.4 MOTIVATION:
This technology can be incredibly useful for individuals with physical disabilities or limitations
that prevent them from using traditional input devices like a mouse or keyboard. By allowing them
to control the cursor with their eye movements, it offers a more accessible and intuitive way to
interact with computers or assistive devices. Just as our eyes move to track and focus on different
objects, the motivation behind eyeball cursor movement using OpenCV is to create a system that
mimics this natural human behavior. By using computer vision techniques provided by OpenCV,
the goal is to develop a system that can detect and track the movement of a person's eyes and
translate that into controlling the movement of a cursor on a screen.
The motivation, therefore, lies in enhancing accessibility and usability for those who may face challenges
in using conventional input methods, offering them a means to navigate and interact with technology more
seamlessly using the natural movement of their eyes.
1.5 HARDWARE REQUIREMENTS SPECIFICATION
➢ Processor - Pentium IV
➢ RAM - 8 GB (min)
➢ Hard Disk - 512 GB
➢ Key Board - Standard Windows Keyboard
➢ Mouse - Two or Three-Button Mouse
➢ Monitor - SVGA
1.7 FUNCTIONAL REQUIREMENTS
The functional requirements of a system describe the functionality or services that the system is
expected to provide. These are statements of the services the system should provide, how the system
should react to particular inputs, and how the system should behave in particular situations.
User Registration: Users register with their registration details.
User Login: Users log in to their account using a password.
Live Inputs: Inputs are given by the user at run time.
Load Model: The trained or tested model will be loaded.
Predict Output: The output will be predicted based on the input parameters.
1. DATA COLLECTION:
Gather a diverse and comprehensive dataset of skin images, including both benign and
malignant melanomas. Ensure the dataset is balanced to avoid biases in the model.
2. DATA PREPROCESSING:
Resize and standardize images to a consistent format. Perform data augmentation techniques
to increase the diversity of the dataset. Normalize pixel values to ensure consistent input to
the model. The collected skin cancer images will undergo preprocessing to improve image
quality and ensure consistency across the dataset. Common pre-processing techniques such as
image resizing, color normalization, and noise reduction will be applied to prepare the images
for analysis.
3. TRAINING AND TESTING:
The dataset is divided into three subsets: training, validation, and testing sets. The selected
model is trained using the training set, with hyperparameters optimized as required. To
prevent overfitting, the model's performance is assessed on the validation set. Following
training and validation, the model's effectiveness is evaluated on the testing set, utilizing
metrics like accuracy, precision, recall, and F1 score. Additionally, the confusion matrix is
analyzed to gain insights into the model's performance concerning benign and malignant
cases, providing a comprehensive understanding of its capabilities.
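As an illustration of this workflow, the sketch below uses scikit-learn with a synthetic dataset standing in
for the real image features; it shows the split-and-evaluate procedure only and is not the project's actual
training code.

# Illustrative train/validation/test workflow (synthetic data as a stand-in for real features).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# 70% training, 15% validation, 15% testing
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.50, random_state=42)

model = SVC().fit(X_train, y_train)                        # train on the training set
print("validation accuracy:", model.score(X_val, y_val))  # used for hyperparameter tuning

y_pred = model.predict(X_test)                             # final evaluation on the test set
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1 score :", f1_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))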
4. MODELING:
In deep learning, a computer model learns to perform classification tasks directly from
images, text, or sound. Deep learning models can achieve state-of-the-art accuracy,
sometimes exceeding human-level performance.
5. PREDICTING:
Prediction refers to the output of an algorithm after it has been trained on a historical dataset
and applied to new data when forecasting the likelihood of a particular outcome.
CHAPTER 2
LITERATURE SURVEY
The basic actions of a mouse are the mouse click and mouse movement. The advanced technology replaces
mouse movement with eye motion with the help of OpenCV, while the mouse button click is implemented by
facial expressions such as blinking the eyes, opening the mouth, or moving the head. One model introduces
a novel camera mouse driven by a 3D model-based face-tracking technique. With a standard personal
computer (PC) configuration, it achieves human-machine interaction through fast visual face tracking and
provides a feasible solution for hands-free control. The face tracker used here is based on a 3D model to
control the mouse and carry out mouse operations.

Gaze estimation can be used in head-mounted display (HMD) environments, since it can afford important
natural computer interface cues. This gaze estimation is based on a 3D analysis of the human eye, and
there are various commercial products that use gaze detection technology. In this method, the user has to
fixate on only one point for calibration; the system then estimates the gaze points.

Facial features such as the eyes and nose tip are recognized and tracked so that the human face can
replace traditional mouse movements for interaction with the computer. This method can be applied to
faces over a wide range of scales. A Six-Segmented Rectangular (SSR) filter and a support vector machine
are used for fast extraction of face candidates and for face verification, respectively; this comprises
the basic detection strategy. Using Java (J2ME) for face candidate detection, a scale-adaptive face
detection and tracking system is implemented to perform left/right mouse click events when the left/right
eye blinks.

A camera mouse has been used to allow disabled people to interact with the computer. The camera mouse
replaces all the roles of the traditional mouse and keyboard, and the proposed system can provide all
mouse click events and keyboard functions. In this method, the camera mouse system together with a timer
acts as the left-click event and blinking as the right-click event. A real-time eye-gaze estimation system
is used for eye-controlled mice to assist the disabled. This system is based on a methodology in which a
general low-resolution webcam is used, yet it detects the eyes and tracks gaze accurately at low cost and
without specific equipment. A PIR sensor is specifically used for human movement detection.

This paper introduces a novel camera mouse driven by visual face tracking based on a 3D model. The camera
has a standard configuration for PCs with increased computation speed and provides a feasible solution to
hands-free control through visual face tracking. Human facial motions can be classified as rigid and
non-rigid motions: the rigid motions are rotation and translation, whereas the non-rigid motions are
opening, closing, and stretching of the mouth.
Firstly, a virtual eyeball model is used, which is based on the 3D characteristics of the human eyeball.
Secondly, using a camera and three collimated IR-LEDs, the 3D position of the virtual eyeball and the gaze
vector are calculated. Thirdly, the 3D eye position and the gaze position on an HMD monitor are computed.
This simplifies the otherwise complex 3D conversion calculations across three reference frames (the camera,
the monitor, and the eye reference frames). Fourthly, based on kappa compensation, a simple user-dependent
calibration method was proposed that requires gazing at only one position.

In our work, we try to compensate for the needs of people who have disabilities and cannot use computer
resources without another individual's help. Our application mainly uses facial features to interact with
the computer, so there is no need for hands to operate the mouse. Paralysis is a special case in which
muscle function is lost in part of the body. It happens when something goes wrong with the way messages
pass between the brain and the muscles. When this happens, the person's ability to control movement may be
limited to the muscles around the eyes, and blinking and eye movement become their only means of
communication. For such communication impairments, the assistance provided is often intrusive, that is, it
requires special hardware or devices. An alternative interface is a non-intrusive communication system
such as EyeKeys, which works without special lighting. The eye direction is detected when the person looks
at the camera, and this can be used to control various applications.
CHAPTER 3
Currently, most users interact with computers through traditional mouse and keyboard inputs. The visual
representation of the cursor is crucial for understanding the interface and guiding users’ interactions.
However, there are some challenges associated with this method, such as poor tracking accuracy and
potential discomfort due to the repetitive nature of cursor movement. The mouse cursor relies on a
system called eye-movement control, which tracks the movement of a user’s eyes and predicts the user’s
gaze direction. The system then calculates the mouse cursor position on the screen based on the predicted
gaze direction. This positioning of the cursor enables the user to interact with the computer screen
without touching the touchpad.
Fig-3.1-Architecture Diagram
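To make the last step of this pipeline concrete, the sketch below shows one plausible way of converting
the gaze ratios produced by the GazeTracking class in Appendix A into an on-screen cursor position with
pyautogui; the exact mapping used in the project may differ.

# Sketch: map gaze ratios (0.0-1.0) from the GazeTracking class onto screen pixel coordinates.
import pyautogui

screen_w, screen_h = pyautogui.size()

def gaze_to_screen(gaze):
    """Convert horizontal/vertical gaze ratios into a cursor position."""
    h = gaze.horizontal_ratio()    # 0.0 = extreme right, 1.0 = extreme left
    v = gaze.vertical_ratio()      # 0.0 = extreme top,   1.0 = extreme bottom
    if h is None or v is None:     # pupils not located in this frame
        return None
    x = int((1.0 - h) * screen_w)  # flip so that looking right moves the cursor right
    y = int(v * screen_h)
    return x, y

# Example use inside a capture loop:
# pos = gaze_to_screen(gaze)
# if pos is not None:
#     pyautogui.moveTo(*pos)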
3.2 MODULES:
• Data collection
• Data pre-processing
• Feature Selection
• Feature Extraction
• Machine Learning
• Model Selection
Supervised Techniques: These techniques can be used for labeled data to build models for classification
and regression. For example: linear regression, decision tree, SVM, etc.
Unsupervised Techniques: These techniques can be used for unlabeled data. For example: K-Means
Clustering, Principal Component Analysis, Hierarchical Clustering, etc. From a taxonomic point of view,
feature selection techniques are classified into filter, wrapper, embedded, and hybrid methods.
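As a small illustration of a filter-style method, the following sketch applies scikit-learn's SelectKBest
to a synthetic dataset; it is an example only and is not part of the project pipeline.

# Filter-style feature selection sketch (illustrative only).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # placeholder data

selector = SelectKBest(score_func=f_classif, k=10)   # score each feature independently, keep the best 10
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)                               # (500, 10)
print(selector.get_support(indices=True))            # indices of the selected features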
CHAPTER 4
METHODOLOGY
4.1.2 SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart.
Fig.no-4.1.2-Sequence Diagram
4.1.3 ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and
actions with support for choice, iteration, and concurrency. In the Unified Modeling
Language, activity diagrams can be used to describe the business and operational
step-by-step workflows of components in a system. An activity diagram shows the overall
flow of control.
Fig.no-4.1.3-Activity Diagram
CHAPTER 5
RESULTS AND DISCUSSION
5.1.1 ACCURACY:
Accuracy is one metric for evaluating classification models. Informally, accuracy is the
fraction of predictions our model got right.
● Accuracy formula:
Accuracy = Number of correct predictions / Total number of predictions
Accuracy = (TP + TN) / (TP + TN + FP + FN)
5.1.2 PRECISION:
Precision is one indicator of a machine learning model's performance – the quality of a
positive prediction made by the model. Precision refers to the number of true positives divided
by the total number of positive predictions.
● Precision formula:
Precision = True Positive/(True Positive + False Positive)
Precision = TP / (TP + FP)
5.1.3 RECALL:
Recall, also known as the true positive rate (TPR), is the percentage of data samples that a
machine learning model correctly identifies as belonging to a class of interest,the “positive
class”,out of the total samples for that class.
● Recall Formula:
Recall = True Positive / (True Positive + False Negative)
Recall = TP / (TP + FN)
● The recall of a machine learning model will be low when the value of
TP + FN (the denominator) is much greater than TP (the numerator).
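A short worked example with hypothetical counts (not measured results of this project) shows how the
above formulas are applied:

# Hypothetical confusion-matrix counts, used only to illustrate the formulas above.
tp, tn, fp, fn = 45, 40, 5, 10
accuracy  = (tp + tn) / (tp + tn + fp + fn)                 # 85 / 100 = 0.85
precision = tp / (tp + fp)                                  # 45 / 50  = 0.90
recall    = tp / (tp + fn)                                  # 45 / 55  ≈ 0.818
f1        = 2 * precision * recall / (precision + recall)   # ≈ 0.857
print(accuracy, precision, recall, f1)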
CHAPTER 6
CONCLUSION
From the process implemented, it is clear that the cursor can be controlled by eyeball
movement, i.e., without using the hands on the computer. This will be helpful for people whose
disability prevents them from using the physical parts of a computer to control the cursor. We
have included face detection, face tracking, eye detection, and interpretation of a sequence of
eye blinks in real time for controlling a non-intrusive human-computer interface. The
conventional method of interacting with the computer through the mouse is replaced with
human eye movements. This technique will help paralyzed and physically challenged
people, especially people without hands, to use a computer efficiently and with ease. Firstly,
the camera captures the image and focuses on the eye in the image using OpenCV code for
pupil detection. This yields the center position of the human eye (pupil). The center position
of the pupil is then taken as a reference, and based on that the user controls the cursor by
moving the eyes left and right [6-9]. This report is organized as follows: Section II describes
existing solutions that obtain cursor movement using 3D models, and Section III presents how
the cursor works based only on eyeball movement using the OpenCV methodology. Because the
cursor can be operated by moving the eyeballs, disabled people can use computers without the
help of others. This technology can be enhanced in the future by adding more techniques, such
as click events and support for all mouse movements, as well as human-interface systems using
eye blinks. The technology can also be extended to combine eyeball movement and eye blinking
to obtain efficient and accurate cursor control.
REFERENCES
[1] Jilin Tu, Thomas Huang, and Hai Tao, "Face as Mouse through Visual Face Tracking," IEEE, 2005.
[2] Eui Chul Lee and Kang Ryoung Park, "A robust eye gaze tracking method based on a virtual eyeball
model," Springer, pp. 319-337, Apr. 2008.
[3] John J. Magee, Margrit Betke, James Gips, Matthew R. Scott, and Benjamin N. Waber, "A Human-
Computer Interface Using Symmetry Between Eyes to Detect Gaze Direction," IEEE Trans., vol. 38,
no. 6, pp. 1248-1259, Nov. 2008.
[4] Sunita Barve, Dhaval Dholakiya, Shashank Gupta, and Dhananjay Dhatrak, "Facial Feature Based Method
for Real Time Face Detection and Tracking I-CURSOR," International Journal of Engineering Research and
Applications, vol. 2, pp. 1406-1410, Apr. 2012.
[5] Yu-Tzu Lin, Ruei-Yan Lin, Yu-Chih Lin, and Greg C. Lee, "Real-time eye-gaze estimation using a low-
resolution webcam," Springer, pp. 543-568, Aug. 2012.
[6] Samuel Epstein, Eric Missimer, and Margrit Betke, "Using kernels for a video-based mouse-replacement
interface," Springer Link, Nov. 2012.
[7] Zakir Hossain, Md Maruf Hossain Shuvo, and Prionjit Sarker, "Hardware and software implementation
of real time electrooculogram (EOG) acquisition system to control computer cursor with eyeball
movement," in 2017 4th International Conference on Advances in Electrical Engineering (ICAEE), pp.
132-137, IEEE, 2017.
[8] Jun-Seok Lee, Kyung-hwa Yu, Sang-won Leigh, Jin-Yong Chung, and Sung-Goo Cho, "Method for
controlling device on the basis of eyeball motion, and device therefor," U.S. Patent 9,864,429, issued
January 9, 2018.
[9] Po-Lei Lee, Jyun-Jie Sie, Yu-Ju Liu, Chi-Hsun Wu, Ming-Huan Lee, Chih-Hung Shu, Po-Hung Li,
Chia-Wei Sun, and Kuo-Kai Shyu, "An SSVEP-actuated brain computer interface using phase-
tagged flickering sequences: a cursor system," Annals of Biomedical Engineering, vol. 38, no. 7, pp.
2383-2397, 2010.
APPENDIX A
import math
import numpy as np
import cv2
from .pupil import Pupil


class Eye(object):
    """
    This class creates a new frame to isolate the eye and
    initiates the pupil detection.
    """

    # Indices of the eye landmarks in the 68-point Multi-PIE annotation
    LEFT_EYE_POINTS = [36, 37, 38, 39, 40, 41]
    RIGHT_EYE_POINTS = [42, 43, 44, 45, 46, 47]

    def __init__(self, original_frame, landmarks, side, calibration):
        self.frame = None
        self.origin = None
        self.center = None
        self.pupil = None
        self._analyze(original_frame, landmarks, side, calibration)

    @staticmethod
    def _middle_point(p1, p2):
        """Returns the middle point (x, y) between two points

        Arguments:
            p1 (dlib.point): First point
            p2 (dlib.point): Second point
        """
        x = int((p1.x + p2.x) / 2)
        y = int((p1.y + p2.y) / 2)
        return (x, y)
    def _isolate(self, frame, landmarks, points):
        """Isolate an eye, to have a frame without other parts of the face

        Arguments:
            frame (numpy.ndarray): Frame containing the face
            landmarks (dlib.full_object_detection): Facial landmarks for the face region
            points (list): Points of an eye (from the 68 Multi-PIE landmarks)
        """
        # Eye region as an array of (x, y) landmark coordinates
        region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in points])
        region = region.astype(np.int32)
        # (masking and cropping of the eye region, which sets self.frame, self.origin and
        # self.center, follow here in the full implementation)

    def _blinking_ratio(self, landmarks, points):
        """Calculates a ratio that can indicate whether an eye is closed or not.
        It's the division of the width of the eye by its height.

        Arguments:
            landmarks (dlib.full_object_detection): Facial landmarks for the face region
            points (list): Points of an eye (from the 68 Multi-PIE landmarks)

        Returns:
            The computed ratio
        """
        left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
        right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
        top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
        bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

        # Width and height of the eye, measured between the landmark points
        eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))
        eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

        try:
            ratio = eye_width / eye_height
        except ZeroDivisionError:
            ratio = None
        return ratio
    def _analyze(self, original_frame, landmarks, side, calibration):
        """Detects and isolates the eye in a new frame, sends data to the calibration
        and initializes the Pupil object.

        Arguments:
            original_frame (numpy.ndarray): Frame passed by the user
            landmarks (dlib.full_object_detection): Facial landmarks for the face region
            side: Indicates whether it's the left eye (0) or the right eye (1)
            calibration (calibration.Calibration): Manages the binarization threshold value
        """
        if side == 0:
            points = self.LEFT_EYE_POINTS
        elif side == 1:
            points = self.RIGHT_EYE_POINTS
        else:
            return

        self.blinking = self._blinking_ratio(landmarks, points)
        self._isolate(original_frame, landmarks, points)

        if not calibration.is_complete():
            calibration.evaluate(self.frame, side)

        threshold = calibration.threshold(side)
        self.pupil = Pupil(self.frame, threshold)
class GazeTracking(object):
    """
    This class tracks the user's gaze.
    It provides useful information like the position of the eyes
    and pupils and allows to know if the eyes are open or closed
    """
    def __init__(self):
        self.frame = None
        self.eye_left = None
        self.eye_right = None
        self.calibration = Calibration()

        # Face detector and 68-point facial landmark predictor (dlib), used by _analyze() below.
        # Assumes: import dlib; from .eye import Eye; from .calibration import Calibration,
        # and that the dlib landmark model file is available at this path.
        self._face_detector = dlib.get_frontal_face_detector()
        self._predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    @property
    def pupils_located(self):
        """Check that the pupils have been located"""
        try:
            int(self.eye_left.pupil.x)
            int(self.eye_left.pupil.y)
            int(self.eye_right.pupil.x)
            int(self.eye_right.pupil.y)
            return True
        except Exception:
            return False

    def _analyze(self):
        """Detects the face and initializes Eye objects"""
        frame = cv2.cvtColor(self.frame, cv2.COLOR_BGR2GRAY)
        faces = self._face_detector(frame)

        try:
            landmarks = self._predictor(frame, faces[0])
            self.eye_left = Eye(frame, landmarks, 0, self.calibration)
            self.eye_right = Eye(frame, landmarks, 1, self.calibration)
        except IndexError:
            self.eye_left = None
            self.eye_right = None

    def refresh(self, frame):
        """Refreshes the frame and analyzes it.

        Arguments:
            frame (numpy.ndarray): The frame to analyze
"""
self.frame = frame
self._analyze()
def pupil_left_coords(self):
"""Returns the coordinates of the left pupil"""
if self.pupils_located:
x = self.eye_left.origin[0] + self.eye_left.pupil.x
y = self.eye_left.origin[1] + self.eye_left.pupil.y
return (x, y)
def pupil_right_coords(self):
"""Returns the coordinates of the right pupil"""
if self.pupils_located:
x = self.eye_right.origin[0] + self.eye_right.pupil.x
y = self.eye_right.origin[1] + self.eye_right.pupil.y
return (x, y)
def horizontal_ratio(self):
"""Returns a number between 0.0 and 1.0 that indicates the
horizontal direction of the gaze. The extreme right is 0.0,
the center is 0.5 and the extreme left is 1.0
if self.pupils_located:
pupil_left = self.eye_left.pupil.x / (self.eye_left.center[0] * 2 - 10)
pupil_right = self.eye_right.pupil.x / (self.eye_right.center[0] * 2 - 10)
return (pupil_left + pupil_right) / 2
def vertical_ratio(self):
"""Returns a number between 0.0 and 1.0 that indicates the
vertical direction of the gaze. The extreme top is 0.0,
the center is 0.5 and the extreme bottom is 1.0
"""
if self.pupils_located:
pupil_left = self.eye_left.pupil.y / (self.eye_left.center[1] * 2 - 10)
pupil_right = self.eye_right.pupil.y / (self.eye_right.center[1] * 2 - 10)
return (pupil_left + pupil_right) / 2
def is_right(self):
"""Returns true if the user is looking to the right"""
if self.pupils_located:
return self.horizontal_ratio() <= 0.40
def is_left(self):
"""Returns true if the user is looking to the left"""
if self.pupils_located:
return self.horizontal_ratio() >= 0.65
55
    def is_center(self):
        """Returns true if the user is looking to the center"""
        if self.pupils_located:
            return self.is_right() is not True and self.is_left() is not True

    def is_blinking(self):
        """Returns true if the user closes his eyes"""
        if self.pupils_located:
            blinking_ratio = (self.eye_left.blinking + self.eye_right.blinking) / 2
            return blinking_ratio > 3.8

    def annotated_frame(self):
        """Returns the main frame with pupils highlighted"""
        frame = self.frame.copy()

        if self.pupils_located:
            color = (0, 255, 0)
            x_left, y_left = self.pupil_left_coords()
            x_right, y_right = self.pupil_right_coords()
            cv2.line(frame, (x_left - 5, y_left), (x_left + 5, y_left), color)
            cv2.line(frame, (x_left, y_left - 5), (x_left, y_left + 5), color)
            cv2.line(frame, (x_right - 5, y_right), (x_right + 5, y_right), color)
            cv2.line(frame, (x_right, y_right - 5), (x_right, y_right + 5), color)
        return frame
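The class above only reports gaze direction and blinking; a click action is not part of this listing. A
hypothetical extension (not part of the project code) could map is_blinking() to a left click with
pyautogui, for example:

# Hypothetical extension: trigger a left click when a blink is detected.
# Assumes a GazeTracking instance `gaze` that is refreshed every frame, and pyautogui.
import time
import pyautogui

last_click = 0.0

def click_on_blink(gaze, cooldown=1.0):
    """Issue a left click on a blink, at most once per `cooldown` seconds."""
    global last_click
    if gaze.is_blinking() and (time.time() - last_click) > cooldown:
        pyautogui.click()
        last_click = time.time()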
"""
moving mouse cursor using opencv eyeball tracking logic
to move cursor we are using pyautogui
"""
#python library import statement
import cv2
from gaze_tracking import GazeTracking #python opencv gaze library to track eye ball
movement
import pyautogui
while True:
56
text = ""
left_pupil = gaze.pupil_left_coords()
right_pupil = gaze.pupil_right_coords() #getting pupil location as x and y cordinates
if len(y) > 1:
data_x = y[0]
data_x = data_x[1:len(data_x)];
data_y = y[1]
data_y = data_y[0:len(data_y)-1]
pyautogui.moveTo(int(data_x),int(data_y)) #moving mouse cursor to eye pupil x and y
right sidelocation
if cv2.waitKey(1) == 27:
break
57
import numpy as np
import cv2


class Pupil(object):
    """
    This class detects the iris of an eye and estimates
    the position of the pupil
    """

    def __init__(self, eye_frame, threshold):
        self.iris_frame = None
        self.threshold = threshold
        self.x = None
        self.y = None
        self.detect_iris(eye_frame)

    @staticmethod
    def image_processing(eye_frame, threshold):
        """Performs operations on the eye frame to isolate the iris

        Arguments:
            eye_frame (numpy.ndarray): Frame containing an eye and nothing else
            threshold (int): Threshold value used to binarize the eye frame

        Returns:
            A frame with a single element representing the iris
        """
        kernel = np.ones((3, 3), np.uint8)
        new_frame = cv2.bilateralFilter(eye_frame, 10, 15, 15)
        new_frame = cv2.erode(new_frame, kernel, iterations=3)
        new_frame = cv2.threshold(new_frame, threshold, 255, cv2.THRESH_BINARY)[1]
        return new_frame

    def detect_iris(self, eye_frame):
        """Detects the iris and estimates its position by calculating the centroid

        Arguments:
            eye_frame (numpy.ndarray): Frame containing an eye and nothing else
        """
        self.iris_frame = self.image_processing(eye_frame, self.threshold)
"""
self.frame = frame
self._analyze()
def pupil_left_coords(self):
"""Returns the coordinates of the left pupil"""
if self.pupils_located:
x = self.eye_left.origin[0] + self.eye_left.pupil.x
y = self.eye_left.origin[1] + self.eye_left.pupil.y
return (x, y)
def pupil_right_coords(self):
"""Returns the coordinates of the right pupil"""
if self.pupils_located:
x = self.eye_right.origin[0] + self.eye_right.pupil.x
y = self.eye_right.origin[1] + self.eye_right.pupil.y
return (x, y)
def horizontal_ratio(self):
"""Returns a number between 0.0 and 1.0 that indicates the
horizontal direction of the gaze. The extreme right is 0.0,
the center is 0.5 and the extreme left is 1.0
if self.pupils_located:
pupil_left = self.eye_left.pupil.x / (self.eye_left.center[0] * 2 - 10)
pupil_right = self.eye_right.pupil.x / (self.eye_right.center[0] * 2 - 10)
return (pupil_left + pupil_right) / 2
def vertical_ratio(self):
"""Returns a number between 0.0 and 1.0 that indicates the
vertical direction of the gaze. The extreme top is 0.0,
the center is 0.5 and the extreme bottom is 1.0
"""
if self.pupils_located:
pupil_left = self.eye_left.pupil.y / (self.eye_left.center[1] * 2 - 10)
pupil_right = self.eye_right.pupil.y / (self.eye_right.center[1] * 2 - 10)
return (pupil_left + pupil_right) / 2
def is_right(self):
"""Returns true if the user is looking to the right"""
if self.pupils_located:
return self.horizontal_ratio() <= 0.40
def is_left(self):
"""Returns true if the user is looking to the left"""
if self.pupils_located:
return self.horizontal_ratio() >= 0.65
59
        # [-2:] keeps the call compatible with OpenCV versions that return 2 or 3 values
        contours, _ = cv2.findContours(self.iris_frame, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]
        contours = sorted(contours, key=cv2.contourArea)

        try:
            # The second-largest contour is taken as the iris; its centroid gives the pupil position
            moments = cv2.moments(contours[-2])
            self.x = int(moments['m10'] / moments['m00'])
            self.y = int(moments['m01'] / moments['m00'])
        except (IndexError, ZeroDivisionError):
            pass
Fig.A.2-DETECTING LEFT SIDE EYE
APPENDIX B
CONFERENCE PRESENTATION
Our paper on EYEBALL CURSOR MOVEMENT USING OPENCV is going to be presented at the
IEEE 2024 Conference hosted by VIT, Vellore. Shortlisted teams will present their papers in
various fields at the conference. Our paper was accepted as paper ID/Submission: 1156.
APPENDIX C
PLAGIARISM