
EYEBALL CURSOR MOVEMENT USING OPENCV

A MAJOR PROJECT REPORT


Submitted by

V. Kamalesh Kumar [RA2011026010198]


Arjun Reddy Patil[RA2011026010224]

Under the guidance of

DR. M. MANIKANDAN
(Assistant Professor, Department of Computational Intelligence)

in partial fulfillment for the award of the degree of

BACHELOR OF TECHNOLOGY
in

COMPUTER SCIENCE & ENGINEERING


with Specialization in ARTIFICIAL INTELLIGENCE AND
MACHINE LEARNING

DEPARTMENT OF COMPUTATIONAL INTELLIGENCE


COLLEGE OF ENGINEERING AND TECHNOLOGY
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
KATTANKULATHUR-603203
MAY 2024

I
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
BONAFIDE CERTIFICATE

Certified that this 18CSP109L major project report titled “EYEBALL CURSOR MOVEMENT
USING OPENCV” is the bonafide work of V. Kamalesh Kumar (RA2011026010198) and Arjun
Reddy Patil (RA2011026010224), who carried out the project work under my supervision. To the
best of my knowledge, the work reported herein does not form part of any other thesis or
dissertation on the basis of which a degree or award was conferred on an earlier occasion for these
or any other candidates.

SIGNATURE SIGNATURE

Dr. M. MANIKANDAN Dr. R. ANNIE UTHRA


Guide, HEAD OF THE DEPARTMENT
Assistant Professor, Professor & Head,
Dept. of Computational Intelligence Dept. of Computational Intelligence
School of Computing, School of Computing,
SRM Institute of Science and SRM Institute of Science and
Technology,Kattankulathur, Technology,Kattankulathur,
Tamil Nadu-603 203 Tamil Nadu-603 203

Signature of the Internal Examiner Signature of the External Examiner

II
Department of Computational Intelligence
SRM Institute of Science & Technology
Own Work Declaration Form
Degree/ Course: B. Tech in Computer Science and Engineering with specialization in Artificial
Intelligence and Machine learning

Student Name: V. Kamalesh Kumar, Arjun Reddy Patil

Registration Number: RA2011026010198, RA2011026010224

Title of Work: EYEBALL CURSOR MOVEMENT USING OPEN CV


We certify that this assessment complies with the University’s Rules and Regulations relating to
academic misconduct and plagiarism, as listed on the University website, in the Regulations, and in the
Education Committee guidelines. We confirm that all the work contained in this assessment is our
own except where indicated, and that we have met the following conditions:
• Referenced / listed all sources as appropriate

• Referenced and put in inverted commas all quoted text (from books, the web, etc.)
• Given the sources of all pictures, data, etc. that are not our own

• Not made any use of the report(s) or essay(s) of any other student(s), either past or present
• Acknowledged in appropriate places any help that we have received from others (e.g. fellow students,
technicians, statisticians, external sources)
• Complied with any other plagiarism criteria specified in the course handbook / University website
We understand that any false claim in respect of this work will be penalized in accordance with the University's policies and regulations

DECLARATION:

We are aware of and understand the University’s policy on academic misconduct and plagiarism, and
we certify that this assessment is our own work, except where indicated by referencing, and that we
have followed the good academic practices noted above.
STUDENT 1 SIGNATURE:
STUDENT 2 SIGNATURE:
DATE:

III
ACKNOWLEDGEMENT
We express our humble gratitude to Dr. C. Muthamizhchelvan, Vice-Chancellor, SRM Institute of
Science and Technology, for the facilities extended for the project work and his continued support.
We extend our sincere thanks to the Dean-CET, SRM Institute of Science and Technology, Dr. T. V.
Gopal, for his invaluable support.
We wish to thank Dr. Revathi Venkataraman, Professor & Chairperson, School of Computing, SRM
Institute of Science and Technology, for her support throughout the project work.
We are incredibly grateful to our Head of the Department, Dr. R. Annie Uthra, Professor, Department
of Computational Intelligence, SRM Institute of Science and Technology, for her suggestions and
encouragement at all stages of the project work.
We want to convey our thanks to our program coordinator and panel head, Dr. D. Anitha, Assistant
Professor, Department of Computational Intelligence, SRM Institute of Science and Technology, for
her suggestions and encouragement at all stages of the project work.
We register our immeasurable thanks to our Faculty Advisor, Dr. Kaavya Kanagaraj, Assistant
Professor, Department of Computational Intelligence, SRM Institute of Science and Technology, for her
suggestions and encouragement at all stages of the project work.
Our inexpressible respect and thanks go to our guide, Dr. M. Manikandan, Assistant Professor,
Department of Computational Intelligence, SRM Institute of Science and Technology, for his
suggestions and encouragement at all stages of the project work and for providing us with the
opportunity to pursue our project under his mentorship. He gave us the freedom and support to explore
the research topics of our interest, and his passion for solving problems and making a difference in the
world has always been inspiring.
We sincerely thank the staff and students of Data Science and Business Systems at SRM Institute of
Science and Technology for their help during our project. Finally, we would like to thank our parents,
family members, and friends for their unconditional love, constant support, and encouragement.

VUPPALA KAMALESH KUMAR (RA2011026010198)


ARJUN REDDY PATIL (RA2011026010224)

IV
ABSTRACT

A human-computer interface system for individual use is introduced. Traditionally, the mouse and
keyboard have served as the input devices of a human-computer interface system, but people
suffering from certain diseases or illnesses are unable to operate computers with them. The idea of
controlling a computer with the eyes is of great use to handicapped and disabled persons, and this
kind of control also eliminates the assistance otherwise required from another person to handle the
computer. The approach is most useful for people without hands, who can then operate the
computer through their eye movements alone. The movement of the cursor is directly associated
with the center of the pupil; hence our first step is detecting the center point of the pupil. This
pupil detection is implemented using the Raspberry Pi and OpenCV. The Raspberry Pi has an
SD/MMC card slot into which an SD card is placed. The SD card is loaded with the operating
system required to start up the Raspberry Pi, and the Raspberry Pi executes the application
program once it is loaded.

V
TABLE OF CONTENTS

Chapter No. Title Page No.


ABSTRACT V
TABLE OF CONTENTS VI
LIST OF FIGURES VII
ABBREVIATIONS X

1 INTRODUCTION

1.1 PROBLEM STATEMENT 1


1.2 OBJECTIVE 1
1.3 SCOPE AND APPLICATION OF PROJECT 2
1.4 MOTIVATION 2
1.5 HARDWARE REQUIREMENTS 3
1.6 SOFTWARE REQUIREMENTS 3
1.7 FUNCTIONAL REQUIREMENTS 4

2 LITERATURE SURVEY 6

3 ARCHITECTURE AND DESIGN 8

3.1 MODEL ARCHITECTURE 8

3.2 MODULES 9

4 METHODOLOGY 11

4.1 UML DIAGRAM 11

4.1.1 USE CASE DIAGRAM 11

4.1.2 SEQUENCE DIAGRAM 12

4.1.3 ACTIVITY DIAGRAM 13

VI
5 RESULTS AND DISCUSSION 14

5.1 TEST RESULTS 14

6 CONCLUSION AND FUTURE ENHANCEMENTS 16

REFERENCES 17

APPENDIX A 18

APPENDIX B 62

APPENDIX C 63

VII
LIST OF FIGURES

FIGURE NO NAME PAGE NO

3.1 ARCHITECTURE DIAGRAM 8

4.1.1 USE CASE DIAGRAM 11

4.1.2 SEQUENCE DIAGRAM 12

4.1.3 ACTIVITY DIAGRAM 13

A.1 DETECTING EYE REGION 60

A.2 DETECTING LEFT SIDE EYE 61

B.1 IEEE 2024 ACCEPTANCE 62

VIII
ABBREVIATIONS

AI Artificial Intelligence

BMI Body Mass Index

SVM Support Vector Machine

FED Feature Engineering and Design

ML Machine Learning

R&D Research and Development

UX User Experience

API Application Programming Interface

X
CHAPTER 1

INTRODUCTION

As computer technologies grow rapidly, the importance of human-computer interaction becomes
highly notable. Some persons with disabilities are unable to use computers, and eyeball movement
control is mainly intended for them. Incorporating this eye-controlling system into computers lets
them work without the help of other individuals. Human-computer interface (HCI) research is focused
on the use of computer technology to provide an interface between the computer and the human, and
there is a need to find suitable technology that enables effective communication between the two.
Human-computer interaction therefore plays an important role. Thus there is a need for a method that
offers an alternative way of communicating between the human and the computer to individuals who
have impairments and gives them an equivalent space to be part of the Information Society [1-5].
In recent years, human-computer interfaces have been attracting the attention of researchers across the
globe. The human-computer interface presented here is an implementation of a vision-based system
for eye movement detection for disabled people.

1.1 PROBLEM STATEMENT:


In the proposed system, we have included face detection, face tracking, eye detection, and interpretation
of a sequence of eye blinks in real time for controlling a non-intrusive human-computer interface. The
conventional method of interacting with the computer through the mouse is replaced with human eye
movements. This technique will help paralyzed and physically challenged people, especially people
without hands, to use the computer efficiently and with ease. Firstly, the camera captures the image and
focuses on the eye in the image using OpenCV code for pupil detection. This yields the center position
of the human eye (the pupil). The center position of the pupil is then taken as a reference, and based on
it the user controls the cursor by moving left and right [6-9]. This report is organized as follows: Section
II describes existing solutions that derive cursor movement from 3D models, and Section III presents
how the cursor is controlled using only eyeball movement with the OpenCV-based methodology.
Eyeball-based Cursor Movement Control (ECMC) is a user-friendly input method that enables users to
move the cursor using their eyes alone. It can significantly improve the accessibility of computers and
related technologies for individuals with motor impairments.
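To make this concrete, the following is a minimal illustrative Python sketch, not the full Raspberry Pi pipeline described in this report: it estimates a crude pupil center by thresholding the darkest blob in the camera frame and nudges the cursor left or right from the horizontal offset of that center. The threshold value, the GAIN constant, and the use of pyautogui as the cursor driver are assumptions made for illustration.

# Illustrative sketch only: crude pupil-center estimate + left/right cursor movement.
import cv2
import pyautogui

GAIN = 40  # assumed: cursor pixels moved per unit of normalised horizontal offset

def pupil_center(gray):
    # Rough estimate: threshold the dark pupil region and take the largest blob's centroid.
    _, thresh = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    center = pupil_center(gray)
    if center is not None:
        offset = (center[0] - frame.shape[1] / 2) / (frame.shape[1] / 2)  # -1 (far left) .. +1 (far right)
        pyautogui.moveRel(int(GAIN * offset), 0)  # move the cursor left/right
    cv2.imshow("pupil tracking (press q to quit)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()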

In the age of the Internet, where most interactions take place through computer
screens, there is a need for efficient and user-friendly interfaces. Eyeball-based Cursor
Movement Control (ECMC) aims to bridge this gap by allowing users to control cursor
movement on a computer screen using their eyeballs alone. This innovative approach could
revolutionize the way we interact with computer screens and pave the way for a more
immersive and intuitive digital experience. To accomplish this, we can employ machine
learning algorithms that can accurately track and predict eye movements. By training these
algorithms on large datasets of eye movement data, we can achieve high levels of accuracy
and precision in controlling cursor movement.
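As a small illustration of this learning idea, sketched on assumed synthetic calibration samples rather than a real eye-movement dataset, a simple regression can map pupil coordinates onto screen coordinates:

# Illustrative only: learn a mapping from pupil coordinates to screen coordinates.
import numpy as np
from sklearn.linear_model import LinearRegression

# Assumed synthetic calibration samples: (pupil_x, pupil_y) -> (screen_x, screen_y)
pupil_pts = np.array([[310, 240], [330, 240], [320, 228], [320, 252], [320, 240]])
screen_pts = np.array([[0, 540], [1919, 540], [960, 0], [960, 1079], [960, 540]])

model = LinearRegression().fit(pupil_pts, screen_pts)
print(model.predict(np.array([[325, 235]])).round())  # predicted cursor position for a new pupil sample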

1.2 OBJECTIVE:
For example, a popular approach to implementing ECMC is the use of the Tobii Pro X2-30
eye tracker, which provides accurate and real-time eye-tracking data. This data can then be
processed by a machine learning algorithm that learns to map eye movements onto the
corresponding cursor movements on the screen. Furthermore, the use of facial recognition
and computer vision techniques could also contribute to the development of ECMC. By
analyzing the user’s facial expressions and detecting changes in facial muscle activity, the
system can accurately determine the user’s intended cursor movement. To ensure a smooth
and seamless user experience, we will need to consider factors such as eye fatigue, eyestrain,
and user comfort. To address these issues, we can incorporate features like automatic pauses,
adjustable tracking speeds, and personalized calibration processes to ensure optimal user
comfort and satisfaction. In conclusion, the development of Eyeball-based Cursor Movement
Control presents a significant opportunity to revolutionize the way we interact with computer
screens. By leveraging the power of machine learning algorithms, facial recognition, and
computer vision techniques, we can create an intuitive and efficient system that enhances
accessibility and the overall user experience.
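One simple way to realise the adjustable tracking speed and smoothness mentioned above is an exponential moving average over the raw gaze estimates; the following is an illustrative sketch with assumed parameter values, not a prescribed design.

# Illustrative smoothing of noisy gaze estimates with a user-adjustable speed factor.
class SmoothedCursor:
    def __init__(self, alpha=0.2, speed=1.0):
        self.alpha = alpha  # smoothing strength: lower = steadier, higher = snappier
        self.speed = speed  # user-adjustable tracking speed multiplier
        self.x = None
        self.y = None

    def update(self, gaze_x, gaze_y):
        if self.x is None:
            self.x, self.y = float(gaze_x), float(gaze_y)
        else:
            step = min(1.0, self.alpha * self.speed)  # fraction of the gap closed per frame
            self.x += step * (gaze_x - self.x)
            self.y += step * (gaze_y - self.y)
        return self.x, self.y

cursor = SmoothedCursor(alpha=0.15, speed=1.2)
for sample in [(100, 200), (105, 198), (160, 240)]:  # example noisy gaze points
    print(cursor.update(*sample))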

1
1.3 SCOPE AND APPLICATION OF PROJECT:
The goal is to develop software that can track the user's eye movements through a camera feed,
detect the direction in which the eyes are looking, and translate those movements into real-time
cursor movements on a computer screen. This technology could be beneficial for individuals with
physical disabilities or limited mobility, providing them with an alternative and accessible way to
interact with a computer interface. The key challenges involved in this project include accurately
detecting and tracking the eyes in a video stream, interpreting the direction of the gaze, and
mapping these movements to control the cursor smoothly and precisely on the screen. The system
should also be responsive, reliable, and user-friendly to provide an efficient and comfortable user
experience.
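As a minimal sketch of the track-the-eyes-through-a-camera-feed step, one reasonable choice (among several) is OpenCV's bundled Haar cascade for eyes:

# Illustrative eye detection in a live camera feed with a Haar cascade.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # mark each detected eye
    cv2.imshow("eye detection (press q to quit)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()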

1.4 MOTIVATION:

This technology can be incredibly useful for individuals with physical disabilities or limitations
that prevent them from using traditional input devices like a mouse or keyboard. By allowing them
to control the cursor with their eye movements, it offers a more accessible and intuitive way to
interact with computers or assistive devices. Just as our eyes move to track and focus on different
objects, the motivation behind eyeball cursor movement using OpenCV is to create a system that
mimics this natural human behavior. By using computer vision techniques provided by OpenCV,
the goal is to develop a system that can detect and track the movement of a person's eyes and
translate that into controlling the movement of a cursor on a screen.
The motivation, therefore, lies in enhancing accessibility and usability for those who may face challenges
in using conventional input methods, offering them a means to navigate and interact with technology
more seamlessly using the natural movement of their eyes.

2
1.5 HARDWARE REQUIREMENTS SPECIFICATION
➢ Processor - Pentium –IV
➢ RAM - 8 GB (min)
➢ Hard Disk - 512 GB
➢ Key Board - Standard Windows Keyboard
➢ Mouse - Two or Three-Button Mouse
➢ Monitor - SVGA

1.6 SOFTWARE REQUIREMENTS SPECIFICATION

➢ Operating system - Windows 7 Ultimate.


➢ Coding Language - Python.
➢ Front-End - Python.
➢ Back-End - Django-ORM
➢ Designing - HTML, CSS, javascript.
➢ Data Base - MySQL (WAMP Server).

3
1.7 FUNCTIONAL REQUIREMENTS

The functional requirements for a system describe the functionality or services that the system is
expected to provide. They are statements of the services the system should provide, of how the system
should react to particular inputs, and of how the system should behave in particular situations.
User Registration: users register with their registration details.
User Login: users log in to their account using a password.
Live Inputs: inputs are given by the user as required.
Load Model: the trained/tested model is loaded.
Predict Output: the output is predicted based on the parameters.

1. DATA COLLECTION:
Gather a diverse and comprehensive dataset of skin images, including both benign and
malignant melanomas. Ensure the dataset is balanced to avoid biases in the model.

2. DATA PREPROCESSING:
Resize and standardize images to a consistent format. Perform data augmentation techniques
to increase the diversity of the dataset. Normalize pixel values to ensure consistent input to
the model. The collected skin cancer images will undergo preprocessing to improve image
quality and ensure consistency across the dataset. Common pre-processing techniques such as
image resizing, color normalization, and noise reduction will be applied to prepare the images
for analysis.
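A short illustrative sketch of these preprocessing steps follows; the 224x224 target size and the blur kernel are assumed example values, not settings prescribed by this report.

# Illustrative preprocessing: resize, mild noise reduction, normalisation, simple augmentation.
import cv2
import numpy as np

def preprocess(path, size=(224, 224)):
    img = cv2.imread(path)                  # read the image (BGR)
    img = cv2.resize(img, size)             # consistent dimensions
    img = cv2.GaussianBlur(img, (3, 3), 0)  # mild noise reduction
    return img.astype(np.float32) / 255.0   # normalise pixel values to [0, 1]

def augment(img):
    return [img, cv2.flip(img, 1)]          # horizontal flip doubles the effective samples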

4
3. TRAINING AND TESTING:

The dataset is divided into three subsets: training, validation, and testing sets. The selected
model is trained using the training set, with hyperparameters optimized as required. To
prevent overfitting, the model's performance is assessed on the validation set. Following
training and validation, the model's effectiveness is evaluated on the testing set, utilizing
metrics like accuracy, precision, recall, and F1 score. Additionally, the confusion matrix is
analyzed to gain insights into the model's performance concerning benign and malignant
cases, providing a comprehensive understanding of its capabilities.
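For illustration, the three-way split described above can be sketched with scikit-learn on synthetic placeholder data; the 70/15/15 proportions are assumed example values.

# Illustrative train/validation/test split on synthetic placeholder data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)
print(len(X_train), len(X_val), len(X_test))  # 420 / 90 / 90 -> roughly 70/15/15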

4. MODELING:
In deep learning, a computer model learns to perform classification tasks directly from
images, text, or sound. Deep learning models can achieve state-of-the-art accuracy,
sometimes exceeding human-level performance.

5. PREDICTING:

Prediction refers to the output of an algorithm after it has been trained on a historical dataset
and applied to new data when forecasting the likelihood of a particular outcome.

5
CHAPTER 2

LITERATURE SURVEY

The basic actions of a mouse are clicking and movement. The advanced technology replaces mouse
movement with eye motion with the help of OpenCV, while the mouse button click is implemented by
facial expressions such as blinking the eyes, opening the mouth, or head movement. One model introduces
a novel camera mouse driven by a 3D-model-based face-tracking technique. On a personal computer (PC)
with a standard configuration, it achieves human-machine interaction through fast visual face tracking
and provides a feasible solution for hands-free control; the face tracker is based on the 3D model and is
used to control the mouse and carry out mouse operations. Gaze estimation can be used in head-mounted
display (HMD) environments since it affords important natural computer-interface cues. This gaze
estimation is based on a 3D analysis of the human eye, and there are various commercial products that
use gaze detection technology. In this method, the user has to fixate on only one point for calibration,
after which the system estimates the gaze points. Facial features such as the eyes and the nose tip are
recognized and tracked so that the human face replaces traditional mouse movements for interaction with
the computer; the method can be applied to faces across a wide range of scales. A Six-Segmented
Rectangular (SSR) filter and a support vector machine are used for fast extraction of face candidates and
for face verification, respectively, which together form the basic detection strategy. Using Java (J2ME)
for face candidate detection, a scale-adaptive face detection and tracking system is implemented that
performs left/right mouse click events when the left/right eye blinks. A camera mouse has been used to
let disabled people interact with the computer; it replaces all the roles of traditional mouse and keyboard
actions, and the proposed system can provide all mouse click events and keyboard functions. In this
method, the camera mouse system together with a timer acts as the left-click event, and blinking acts as
the right-click event. A real-time eye-gaze estimation system is used for eye-controlled mice to assist the
disabled. This system is based on a methodology in which a general low-resolution webcam is used, yet
it detects the eyes and tracks gaze accurately at low expense and without specific equipment. A PIR
sensor is specifically used for human movement detection. One paper introduces a novel camera mouse
driven by visual face tracking based on a 3D model. This camera has a standard configuration for PCs
with increased computation speed and also provides a feasible solution to hands-free control through
visual face tracking. Human facial motions can be classified as rigid motions and

6
nonrigid motions. The rigid motions are rotation and translation whereas the non-rigid motions are
opening, closing, and stretching of the mouth.

Firstly, a virtual eyeball model is used, based on the 3D characteristics of the human eyeball. Secondly,
using a camera and three collimated IR-LEDs, the 3D position of the virtual eyeball and the gaze vector
are calculated. Thirdly, the 3D eye position and the gaze position on an HMD monitor are computed.
This simplifies the otherwise complex 3D conversion calculations, which involve three reference frames
(the camera, the monitor, and the eye reference frames). Fourthly, based on kappa compensation, a
simple user-dependent calibration method was proposed that requires gazing at only one position. In our
work, we are trying to meet the needs of people who have disabilities and cannot use computer resources
without another individual's help. Our application mainly uses facial features to interact with the
computer, so there is no need for hands to operate the mouse. Paralysis is a special case involving the
loss of muscle function in part of the body; it happens when something goes wrong with the way
messages pass between the brain and the muscles. When such a thing happens, the person's ability to
control movement may be limited to the muscles around the eyes, and blinking and moving the eyes
become the only means of communication. For such communication deficits, the assistance provided is
often intrusive, that is, it requires special hardware or a device. An alternative is a non-intrusive
communication system such as Eye Keys, which works without special lighting: the eye direction is
detected when the person looks at the camera, and it can be used to control various applications.

7
CHAPTER 3

EYEBALL CURSOR MOVEMENT ARCHITECTURE AND DESIGN

3.1 MODEL ARCHITECTURE:

Currently, most users interact with computers through traditional mouse and keyboard inputs. The visual
representation of the cursor is crucial for understanding the interface and guiding users’ interactions.
However, there are challenges associated with eye-based control, such as poor tracking accuracy and
potential discomfort due to the repetitive nature of cursor movement. In the proposed architecture, the
mouse cursor relies on an eye-movement control system, which tracks the movement of a user’s eyes and
predicts the user’s gaze direction. The system then calculates the mouse cursor position on the screen
from the predicted gaze direction. This positioning of the cursor enables the user to interact with the
computer screen without touching the mouse or touchpad.

Fig-3.1-Architecture Diagram
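A minimal sketch of the gaze-direction-to-cursor-position step follows; it assumes the gaze tracker yields normalised horizontal and vertical ratios in the range 0.0 to 1.0 and that pyautogui is used as the cursor driver, both of which are illustrative assumptions.

# Illustrative mapping of normalised gaze ratios onto screen coordinates.
import pyautogui

def gaze_to_screen(h_ratio, v_ratio):
    width, height = pyautogui.size()
    return int(h_ratio * (width - 1)), int(v_ratio * (height - 1))

x, y = gaze_to_screen(0.25, 0.50)      # gaze a quarter of the way from the left, vertically centred
pyautogui.moveTo(x, y, duration=0.1)   # position the cursor accordingly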

8
3.2 MODULES:
• Data collection
• Data pre-processing
• Feature Selection
• Feature Extraction
• Machine Learning
• Model Selection

3.2.1 Data Collection:


The dataset consists of 4 types of diseased and healthy leaves of Apple, Grapes, Cherry, and Corn (Maize).
The dataset was collected from an online platform, the public Kaggle collection. The dataset is named
Plant Disease, and both the train and test sets are classified into 5 classes, namely bacteria, fungi, virus,
nematodes, and normal. The dataset contains a total of 600 images, with each class consisting of 50 images.
The images are in JPG format.

3.2.2 Data Pre-Processing:


Region segmentation: when working on image classification, the usual preprocessing steps include
scaling images to the same dimensions and removal of the background and artifacts. Since the Plant Village
dataset includes already segmented and scaled images, these steps were not needed in our case. We
preprocessed these images by further segmenting them in order to extract potentially infected leaf areas,
which was done by removing all pixels whose green channel value exceeded those of the red and blue
channels.
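A short sketch of the segmentation rule just described; the input and output file names are assumed examples.

# Illustrative segmentation: remove pixels whose green channel exceeds both red and blue.
import cv2

img = cv2.imread("leaf.jpg")       # assumed example input image
b, g, r = cv2.split(img)           # OpenCV stores channels in BGR order
mask = (g > r) & (g > b)           # pixels dominated by green
segmented = img.copy()
segmented[mask] = 0                # remove (black out) those pixels
cv2.imwrite("leaf_segmented.jpg", segmented)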

3.2.3 Feature Selection:


The goal of feature selection techniques in machine learning is to find the best set of features that allows
one to build optimized models of studied phenomena. The techniques for feature selection in machine
learning can be broadly classified into the following categories: Supervised Techniques: These techniques
can be used for labeled data and to identify the relevant features for increasing the efficiency of supervised

models like classification and regression. For Example- linear regression, decision tree, SVM, etc.

9
Unsupervised Techniques: These techniques can be used for unlabeled data. For Example- K-Means
Clustering, Principal Component Analysis, Hierarchical Clustering, etc. From a taxonomic point of view,
these techniques are classified into filter, wrapper, embedded, and hybrid methods.
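As one concrete supervised (filter-type) example of feature selection, sketched on synthetic placeholder data:

# Illustrative filter-based feature selection: keep the k best features by ANOVA F-score.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)
X_selected = SelectKBest(score_func=f_classif, k=5).fit_transform(X, y)
print(X.shape, "->", X_selected.shape)  # (200, 30) -> (200, 5)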

3.2.4 Feature Extraction:


Feature extraction is part of the dimensionality reduction process, in which an initial set of raw data
is divided and reduced into more manageable groups, making it easier to process. The most important
characteristic of these large data sets is that they have a large number of variables, which require a lot of
computing resources to process. Feature extraction helps to obtain the best features from such big data
sets by selecting and combining variables into features, thus effectively reducing the amount of data.
These features are easy to process but are still able to describe the actual data set accurately and
faithfully. Color features are obtained by extracting statistical features from image
histograms. They are used to provide a general description of the color statistics of the image.
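The colour-histogram features mentioned above can be sketched as follows; the image path and the 32-bin setting are assumed example values.

# Illustrative colour-histogram feature extraction: one normalised histogram per channel.
import cv2
import numpy as np

def color_histogram_features(path, bins=32):
    img = cv2.imread(path)
    feats = []
    for channel in range(3):  # B, G, R
        hist = cv2.calcHist([img], [channel], None, [bins], [0, 256])
        feats.append(cv2.normalize(hist, hist).flatten())
    return np.concatenate(feats)  # feature vector of length 3 * bins

print(color_histogram_features("leaf.jpg").shape)  # (96,) for the assumed example image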

3.2.5 Machine Learning:


Machine learning is like teaching computers to learn from examples and experiences, just like how
humans learn from their mistakes and successes. Instead of giving the computer explicit instructions on
what to do, we provide it with data and let it learn patterns and make decisions on its own. It's like showing
a child pictures of different animals and letting them figure out which ones are cats and which ones are
dogs by themselves. Over time, the computer gets better at making accurate predictions or decisions based
on the information it has learned.

3.2.6 Model Selection:


Model selection is like choosing the right tool for a job. Imagine you have a bunch of different tools, each
designed to do a specific task. Similarly, in machine learning, some different algorithms or models can
be used to solve a problem. Model selection is the process of picking the best model for a particular task
or dataset. Just like you wouldn't use a hammer to tighten a screw, you wouldn't use a model that's not
suited for your data. You try out different models, see how well they perform on your data, and then pick
the one that gives you the best results. It's all about finding the right fit for the job at hand.
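A small sketch of this try-several-models-and-keep-the-best idea, using cross-validation on synthetic placeholder data; the candidate models listed are assumed examples.

# Illustrative model selection by 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
candidates = {
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
print(scores, "-> selected:", max(scores, key=scores.get))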

10
CHAPTER 4

METHODOLOGY

4.1 UML DIAGRAMS

4.1.1 USE CASE DIAGRAM


A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram
defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented
as use cases), and any dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor. Roles of the actors
in the system can be depicted.

Fig.no-4.1.1- Use Case Diagram

11
4.1.2 SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart.

Fig.no-4.1.2-Sequence Diagram

12
4.1.3 ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and
actions, with support for choice, iteration, and concurrency. In the Unified Modeling
Language, activity diagrams can be used to describe the business and operational step-
by-step workflows of components in a system. An activity diagram shows the overall
flow of control.

Fig.no-4.1.3-Activity Diagram

13
CHAPTER 5

RESULTS AND DISCUSSIONS

5.1 TEST RESULTS

5.1.1 ACCURACY:
Accuracy is one metric for evaluating classification models. Informally, accuracy is the
fraction of predictions our model got right.
● Accuracy formula:
Accuracy = (TP + TN) / (TP + TN + FP + FN)

5.1.2 PRECISION:
Precision is one indicator of a machine learning model's performance – the quality of a
positive prediction made by the model. Precision refers to the number of true positives divided
by the total number of positive predictions.

● Precision formula:
Precision = True Positive / (True Positive + False Positive)
Precision = TP / (TP + FP)

5.1.3 RECALL:
Recall, also known as the true positive rate (TPR), is the percentage of data samples that a
machine learning model correctly identifies as belonging to a class of interest (the "positive
class") out of the total samples for that class.

● Recall formula:
Recall = True Positive / (True Positive + False Negative)
Recall = TP / (TP + FN)

● The recall of a machine learning model will be low when the value of
TP + FN (the denominator) is much larger than TP (the numerator), i.e. when there are many false negatives.

● The recall of a machine learning model will be high when the value of
TP (the numerator) is close to TP + FN (the denominator), i.e. when there are few false negatives.

Unlike Precision, Recall is independent of the number of negative sample classifications.


Further, if the model classifies all positive samples as positive, then Recall will be 1.
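For illustration, all of the metrics discussed in this chapter can be computed with scikit-learn on small example label vectors; the values below are illustrative, not results of this project.

# Illustrative computation of accuracy, precision, recall, F1 score, and the confusion matrix.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # example ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # example model predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))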

15
CHAPTER 6

CONCLUSION AND FUTURE ENHANCEMENTS

From the process implemented, it is clear that the cursor can be controlled by eyeball
movement, i.e., without using one's hands on the computer. This will be helpful for people
who have a disability that prevents them from using the physical parts of the computer to
control the cursor. We have included face detection, face tracking, eye detection, and
interpretation of a sequence of eye blinks in real time for controlling a non-intrusive
human-computer interface. The conventional method of interacting with the computer
through the mouse is replaced with human eye movements. This technique will help
paralyzed and physically challenged people, especially people without hands, to use the
computer efficiently and with ease. Firstly, the camera captures the image and focuses on
the eye in the image using OpenCV code for pupil detection. This yields the center position
of the human eye (the pupil). The center position of the pupil is then taken as a reference,
and based on it the user controls the cursor by moving left and right [6-9]. Because the
cursor can be operated by moving the eyeballs, disabled people can use computers without
the help of others. This technology can be enhanced in the future by adding more
techniques, such as clicking events and all the other mouse movements, and by extending
the human interface system to use eye blinks. The technology can also be extended to
combine eyeball movement and eye blinking to obtain efficient and accurate cursor
movement.

16
REFERENCES

[1] Jilin Tu, Thomas Huang, and Hai Tao, "Face as Mouse through Visual Face Tracking", IEEE, 2005.

[2] EniChul Lee and Kang Ryoung Park, "A robust eye gaze tracking method based on a virtual eyeball
model", Springer, pp. 319-337, Apr. 2008.

[3] John J. Magee, Margrit Betke, James Gips, Matthew R. Scott, and Benjamin N. Waber, "A Human-
Computer Interface Using Symmetry Between Eyes to Detect Gaze Direction", IEEE Trans., Vol. 38,
No. 6, pp. 1248-1259, Nov. 2008.

[4] Sunita Barve, Dhaval Dholakiya, Shashank Gupta, and Dhananjay Dhatrak, "Facial Feature Based
Method for Real Time Face Detection and Tracking I-CURSOR", International Journal of Engineering
Research and Applications, Vol. 2, pp. 1406-1410, Apr. 2012.

[5] Yu-Tzu Lin, Ruei-Yan Lin, Yu-Chih Lin, and Greg C. Lee, "Real-time eye-gaze estimation using a
low-resolution webcam", Springer, pp. 543-568, Aug. 2012.

[6] Samuel Epstein, Eric Missimer, and Margrit Betke, "Using kernels for a video-based mouse-
replacement interface", Springer Link, Nov. 2012.

[7] Hossain, Zakir, Md Maruf Hossain Shuvo, and Prionjit Sarker, "Hardware and software implementation
of real time electrooculogram (EOG) acquisition system to control computer cursor with eyeball
movement", in 2017 4th International Conference on Advances in Electrical Engineering (ICAEE), pp.
132-137, IEEE, 2017.

[8] Lee, Jun-Seok, Kyung-hwa Yu, Sang-won Leigh, Jin-Yong Chung, and Sung-Goo Cho, "Method for
controlling device on the basis of eyeball motion, and device therefor", U.S. Patent 9,864,429, issued
January 9, 2018.

[9] Lee, Po-Lei, Jyun-Jie Sie, Yu-Ju Liu, Chi-Hsun Wu, Ming-Huan Lee, Chih-Hung Shu, Po-Hung Li,
Chia-Wei Sun, and Kuo-Kai Shyu, "An SSVEP-actuated brain computer interface using phase-tagged
flickering sequences: a cursor system", Annals of Biomedical Engineering, 38(7), pp. 2383-2397, 2010.

17
APPENDIX A

SOURCE CODE AND SCREEN SHOTS OF MODULES

import math
import numpy as np
import cv2
from .pupil import Pupil


class Eye(object):
    """
    This class creates a new frame to isolate the eye and
    initiates the pupil detection.
    """

LEFT_EYE_POINTS = [36, 37, 38, 39, 40, 41]


RIGHT_EYE_POINTS = [42, 43, 44, 45, 46, 47]

def __init__(self, original_frame, landmarks, side, calibration):


self.frame = None
self.origin = None
self.center = None
self.pupil = None

self._analyze(original_frame, landmarks, side, calibration)


@staticmethod
def _middle_point(p1, p2):
"""Returns the middle point (x,y) between two points

Arguments:
p1 (dlib.point): First point
p2 (dlib.point): Second point
"""
x = int((p1.x + p2.x) / 2)
y = int((p1.y + p2.y) / 2)
return (x, y)

def _isolate(self, frame, landmarks, points):


"""Isolate an eye, to have a frame without other part of the face.
Arguments:
frame (numpy.ndarray): Frame containing the face
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

19
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

20
try:
ratio = eye_width / eye_height
except ZeroDivisionError:
ratio = None

return ratio

def _analyze(self, original_frame, landmarks, side, calibration):


"""Detects and isolates the eye in a new frame, sends data to the calibration
and initializes Pupil object.

Arguments:
original_frame (numpy.ndarray): Frame passed by the user
landmarks (dlib.full_object_detection): Facial landmarks for the face region
side: Indicates whether it's the left eye (0) or the right eye (1)
calibration (calibration.Calibration): Manages the binarization threshold value
"""
if side == 0:
points = self.LEFT_EYE_POINTS
elif side == 1:
points = self.RIGHT_EYE_POINTS
else:
return

self.blinking = self._blinking_ratio(landmarks, points)


self._isolate(original_frame, landmarks, points)

if not calibration.is_complete():
calibration.evaluate(self.frame, side)

threshold = calibration.threshold(side)
self.pupil = Pupil(self.frame, threshold)

from __future__ import division


import os
import cv2
import dlib
from .eye import Eye
from .calibration import Calibration

class GazeTracking(object):
"""
This class tracks the user's gaze.
It provides useful information like the position of the eyes
and pupils and allows to know if the eyes are open or closed

"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

22
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

23
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

24
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

25
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

26
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

27
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

28
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

29
"""
region = np.array([(landmarks.part(point).x, landmarks.part(point).y) for point in
points])
region = region.astype(np.int32)

# Applying a mask to get only the eye


height, width = frame.shape[:2]
black_frame = np.zeros((height, width), np.uint8)
mask = np.full((height, width), 255, np.uint8)
cv2.fillPoly(mask, [region], (0, 0, 0))
eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

# Cropping on the eye


margin = 5
min_x = np.min(region[:, 0]) - margin
max_x = np.max(region[:, 0]) + margin
min_y = np.min(region[:, 1]) - margin
max_y = np.max(region[:, 1]) + margin
self.frame = eye[min_y:max_y, min_x:max_x]
self.origin = (min_x, min_y)

height, width = self.frame.shape[:2]


self.center = (width / 2, height / 2)

def _blinking_ratio(self, landmarks, points):


"""Calculates a ratio that can indicate whether an eye is closed or not.
It's the division of the width of the eye, by its height.

Arguments:
landmarks (dlib.full_object_detection): Facial landmarks for the face region
points (list): Points of an eye (from the 68 Multi-PIE landmarks)

Returns:
The computed ratio
"""
left = (landmarks.part(points[0]).x, landmarks.part(points[0]).y)
right = (landmarks.part(points[3]).x, landmarks.part(points[3]).y)
top = self._middle_point(landmarks.part(points[1]), landmarks.part(points[2]))
bottom = self._middle_point(landmarks.part(points[5]), landmarks.part(points[4]))

eye_width = math.hypot((left[0] - right[0]), (left[1] - right[1]))


eye_height = math.hypot((top[0] - bottom[0]), (top[1] - bottom[1]))

"""
def init (self):
self.frame = None
self.eye_left = None
self.eye_right = None
self.calibration = Calibration()

# _face_detector is used to detect faces


self._face_detector = dlib.get_frontal_face_detector()

# _predictor is used to get facial landmarks of a given face


cwd = os.path.abspath(os.path.dirname( file ))
model_path = os.path.abspath(os.path.join(cwd,
"trained_models/shape_predictor_68_face_landmarks.dat"))
self._predictor = dlib.shape_predictor(model_path)
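The shape_predictor_68_face_landmarks.dat file loaded above is the standard 68-point landmark model distributed with dlib (downloadable from dlib.net); it is assumed to sit in a trained_models/ folder next to the gaze_tracking package.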

@property
def pupils_located(self):
"""Check that the pupils have been located"""
try:
int(self.eye_left.pupil.x)
int(self.eye_left.pupil.y)
int(self.eye_right.pupil.x)
int(self.eye_right.pupil.y)
return True
except Exception:
return False

def _analyze(self):
"""Detects the face and initialize Eye objects"""
frame = cv2.cvtColor(self.frame, cv2.COLOR_BGR2GRAY)
faces = self._face_detector(frame)

try:
landmarks = self._predictor(frame, faces[0])
self.eye_left = Eye(frame, landmarks, 0, self.calibration)
self.eye_right = Eye(frame, landmarks, 1, self.calibration)

except IndexError:
self.eye_left = None
self.eye_right = None

def refresh(self, frame):
"""Refreshes the frame and analyzes it.

Arguments:
frame (numpy.ndarray): The frame to analyze

"""
self.frame = frame
self._analyze()

def pupil_left_coords(self):
"""Returns the coordinates of the left pupil"""
if self.pupils_located:
x = self.eye_left.origin[0] + self.eye_left.pupil.x
y = self.eye_left.origin[1] + self.eye_left.pupil.y
return (x, y)

def pupil_right_coords(self):
"""Returns the coordinates of the right pupil"""
if self.pupils_located:
x = self.eye_right.origin[0] + self.eye_right.pupil.x
y = self.eye_right.origin[1] + self.eye_right.pupil.y
return (x, y)

def horizontal_ratio(self):
"""Returns a number between 0.0 and 1.0 that indicates the
horizontal direction of the gaze. The extreme right is 0.0,
the center is 0.5 and the extreme left is 1.0
"""
if self.pupils_located:
pupil_left = self.eye_left.pupil.x / (self.eye_left.center[0] * 2 - 10)
pupil_right = self.eye_right.pupil.x / (self.eye_right.center[0] * 2 - 10)
return (pupil_left + pupil_right) / 2

def vertical_ratio(self):
"""Returns a number between 0.0 and 1.0 that indicates the
vertical direction of the gaze. The extreme top is 0.0,
the center is 0.5 and the extreme bottom is 1.0
"""
if self.pupils_located:
pupil_left = self.eye_left.pupil.y / (self.eye_left.center[1] * 2 - 10)
pupil_right = self.eye_right.pupil.y / (self.eye_right.center[1] * 2 - 10)
return (pupil_left + pupil_right) / 2

def is_right(self):
"""Returns true if the user is looking to the right"""
if self.pupils_located:
return self.horizontal_ratio() <= 0.40

def is_left(self):
"""Returns true if the user is looking to the left"""
if self.pupils_located:
return self.horizontal_ratio() >= 0.65

def is_center(self):
"""Returns true if the user is looking to the center"""
if self.pupils_located:
return self.is_right() is not True and self.is_left() is not True

def is_blinking(self):
"""Returns true if the user closes his eyes"""
if self.pupils_located:
blinking_ratio = (self.eye_left.blinking + self.eye_right.blinking) / 2
return blinking_ratio > 3.8

def annotated_frame(self):
"""Returns the main frame with pupils highlighted"""
frame = self.frame.copy()
if self.pupils_located:
color = (0, 255, 0)
x_left, y_left = self.pupil_left_coords()
x_right, y_right = self.pupil_right_coords()
cv2.line(frame, (x_left - 5, y_left), (x_left + 5, y_left), color)
cv2.line(frame, (x_left, y_left - 5), (x_left, y_left + 5), color)
cv2.line(frame, (x_right - 5, y_right), (x_right + 5, y_right), color)
cv2.line(frame, (x_right, y_right - 5), (x_right, y_right + 5), color)

return frame
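Since horizontal_ratio() and vertical_ratio() already normalise the pupil position to a 0.0-1.0 range, the gaze can also be mapped straight onto screen coordinates instead of moving the cursor to the raw pupil pixel position. The snippet below is only a sketch of that idea (it assumes a GazeTracking object named gaze and uses pyautogui.size(); it is not part of the submitted listing):

import pyautogui

screen_w, screen_h = pyautogui.size()  # screen resolution in pixels


def move_cursor_from_gaze(gaze):
    """Map the normalised gaze ratios onto the screen, if the pupils were found."""
    h_ratio = gaze.horizontal_ratio()
    v_ratio = gaze.vertical_ratio()
    if h_ratio is None or v_ratio is None:
        return
    # horizontal_ratio() is 0.0 at the extreme right and 1.0 at the extreme left,
    # so it is inverted before scaling to the screen width
    x = (1.0 - h_ratio) * screen_w
    y = v_ratio * screen_h
    pyautogui.moveTo(int(x), int(y), duration=0.1)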
"""
moving mouse cursor using opencv eyeball tracking logic
to move cursor we are using pyautogui

"""
#python library import statement
import cv2
from gaze_tracking import GazeTracking #python opencv gaze library to track eye ball
movement
import pyautogui

gaze = GazeTracking()  # eye ball tracking object creation

webcam = cv2.VideoCapture(0)  # starting the web cam

while True:

_, frame = webcam.read()  # reading frames from the webcam

gaze.refresh(frame)  # sending the frame to the gaze library to detect eyeball movement

frame = gaze.annotated_frame()  # returns the frame with both pupils highlighted

text = ""

if gaze.is_blinking():  # displaying result
text = "Blinking"
elif gaze.is_right():
text = "Looking right"
elif gaze.is_left():
text = "Looking left"
elif gaze.is_center():
text = "Looking center"

cv2.putText(frame, text, (90, 60), cv2.FONT_HERSHEY_DUPLEX, 1.6, (147, 58, 31), 2)

left_pupil = gaze.pupil_left_coords()
right_pupil = gaze.pupil_right_coords()  # getting pupil locations as x and y coordinates

x = str(left_pupil).split(",")  # getting left pupil x and y location
y = str(right_pupil).split(",")  # getting right pupil x and y location
if len(x) > 1:
data_x = x[0]
data_x = data_x[1:len(data_x)]
data_y = x[1]
data_y = data_y[0:len(data_y) - 1]
pyautogui.moveTo(int(data_x), int(data_y))  # moving mouse cursor to the left pupil's x and y location

if len(y) > 1:
data_x = y[0]
data_x = data_x[1:len(data_x)]
data_y = y[1]
data_y = data_y[0:len(data_y) - 1]
pyautogui.moveTo(int(data_x), int(data_y))  # moving mouse cursor to the right pupil's x and y location

cv2.putText(frame, "Left pupil: " + str(left_pupil), (90, 130),


cv2.FONT_HERSHEY_DUPLEX, 0.9, (147, 58, 31), 1)
cv2.putText(frame, "Right pupil: " + str(right_pupil), (90, 165),
cv2.FONT_HERSHEY_DUPLEX, 0.9, (147, 58, 31), 1)

cv2.imshow("EyeBall Cursor Movement", frame)

if cv2.waitKey(1) == 27:
break
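The loop above only moves the cursor. A natural extension, not part of the submitted listing, is to use the blink detector for clicking; the sketch below reuses the pyautogui import from the script above, counts consecutive "blinking" frames and fires a click on a long, deliberate blink (the frame threshold of 10 is an assumed value, roughly 0.3-0.5 s at typical webcam frame rates):

BLINK_FRAMES_FOR_CLICK = 10  # assumed value, tune per webcam frame rate
blink_counter = 0


def handle_blink(gaze):
    """Issue a left click when a deliberately long blink ends."""
    global blink_counter
    if gaze.is_blinking():
        blink_counter += 1
    else:
        if blink_counter >= BLINK_FRAMES_FOR_CLICK:
            pyautogui.click()  # click at the current cursor position
        blink_counter = 0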

import numpy as np
import cv2

class Pupil(object):
"""
This class detects the iris of an eye and estimates
the position of the pupil
"""

def __init__(self, eye_frame, threshold):
self.iris_frame = None
self.threshold = threshold
self.x = None
self.y = None

self.detect_iris(eye_frame)

@staticmethod
def image_processing(eye_frame, threshold):
"""Performs operations on the eye frame to isolate the iris

Arguments:
eye_frame (numpy.ndarray): Frame containing an eye and nothing else
threshold (int): Threshold value used to binarize the eye frame

Returns:
A frame with a single element representing the iris
"""
kernel = np.ones((3, 3), np.uint8)
new_frame = cv2.bilateralFilter(eye_frame, 10, 15, 15)
new_frame = cv2.erode(new_frame, kernel, iterations=3)
new_frame = cv2.threshold(new_frame, threshold, 255, cv2.THRESH_BINARY)[1]

return new_frame
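In image_processing above, the bilateral filter smooths the eye crop while preserving edges, the erosion removes small bright specks such as corneal reflections, and the binary threshold leaves the dark iris/pupil region as the dominant blob; detect_iris below then takes the centroid of that blob as the pupil position.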

def detect_iris(self, eye_frame):
"""Detects the iris and estimates the position of the iris by
calculating the centroid.

Arguments:
eye_frame (numpy.ndarray): Frame containing an eye and nothing else
"""
self.iris_frame = self.image_processing(eye_frame, self.threshold)

contours, _ = cv2.findContours(self.iris_frame, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
"""
self.frame = frame
self._analyze()

def pupil_left_coords(self):
"""Returns the coordinates of the left pupil"""
if self.pupils_located:
x = self.eye_left.origin[0] + self.eye_left.pupil.x
y = self.eye_left.origin[1] + self.eye_left.pupil.y
return (x, y)

def pupil_right_coords(self):
"""Returns the coordinates of the right pupil"""
if self.pupils_located:
x = self.eye_right.origin[0] + self.eye_right.pupil.x
y = self.eye_right.origin[1] + self.eye_right.pupil.y
return (x, y)

def horizontal_ratio(self):
"""Returns a number between 0.0 and 1.0 that indicates the
horizontal direction of the gaze. The extreme right is 0.0,
the center is 0.5 and the extreme left is 1.0
if self.pupils_located:
pupil_left = self.eye_left.pupil.x / (self.eye_left.center[0] * 2 - 10)
pupil_right = self.eye_right.pupil.x / (self.eye_right.center[0] * 2 - 10)
return (pupil_left + pupil_right) / 2

def vertical_ratio(self):
"""Returns a number between 0.0 and 1.0 that indicates the
vertical direction of the gaze. The extreme top is 0.0,
the center is 0.5 and the extreme bottom is 1.0
"""
if self.pupils_located:
pupil_left = self.eye_left.pupil.y / (self.eye_left.center[1] * 2 - 10)
pupil_right = self.eye_right.pupil.y / (self.eye_right.center[1] * 2 - 10)
return (pupil_left + pupil_right) / 2

def is_right(self):
"""Returns true if the user is looking to the right"""
if self.pupils_located:
return self.horizontal_ratio() <= 0.40
def is_left(self):
"""Returns true if the user is looking to the left"""
if self.pupils_located:
return self.horizontal_ratio() >= 0.65

59
contours = sorted(contours, key=cv2.contourArea)

try:
# the largest contour is usually the whole eye frame, so the second
# largest is taken as the iris; its centroid gives the pupil position
moments = cv2.moments(contours[-2])
self.x = int(moments['m10'] / moments['m00'])
self.y = int(moments['m01'] / moments['m00'])
except (IndexError, ZeroDivisionError):
pass
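GazeTracking.__init__ above creates a Calibration() helper whose listing is not reproduced in this appendix. The code below is only a sketch of what such a helper could look like: it tries a range of binarization thresholds on the first few eye frames and keeps the one for which the dark (iris) pixels cover roughly a quarter of the cropped eye. The target ratio of 0.25, the 5-frame window and the method names are assumptions for illustration, not values taken from the report.

import numpy as np
import cv2


class Calibration(object):
    """Sketch of a threshold-calibration helper for the Pupil binarization step."""

    def __init__(self, frames_needed=5, target_iris_ratio=0.25):
        self.frames_needed = frames_needed            # assumed number of calibration frames
        self.target_iris_ratio = target_iris_ratio    # assumed fraction of dark pixels at a good threshold
        self.thresholds = []

    def is_complete(self):
        return len(self.thresholds) >= self.frames_needed

    def threshold(self):
        """Average of the best thresholds found so far (a default of 50 before calibration)."""
        return int(sum(self.thresholds) / len(self.thresholds)) if self.thresholds else 50

    @staticmethod
    def _iris_ratio(eye_frame, threshold):
        """Fraction of pixels that stay dark after binarizing with the given threshold."""
        binarized = cv2.threshold(eye_frame, threshold, 255, cv2.THRESH_BINARY)[1]
        return 1.0 - (np.count_nonzero(binarized) / binarized.size)

    def evaluate(self, eye_frame):
        """Record the threshold whose iris ratio is closest to the target for this frame."""
        best = min(range(5, 100, 5),
                   key=lambda t: abs(self._iris_ratio(eye_frame, t) - self.target_iris_ratio))
        self.thresholds.append(best)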

Fig.A.1-DETECTING EYE REGION

Fig.A.2-DETECTING LEFT SIDE EYE

10.2 APPENDIX B

CONFERENCE PRESENTATION

Our paper on EYE BALL CURSOR MOVEMENT USING OPEN CV is to be presented at the
IEEE 2024 Conference hosted by VIT, Vellore. Shortlisted teams will present their papers on
various fields at the conference. Our paper was accepted with paper ID/submission number 1156.

Fig.B.1 IEEE 2024 Acceptance

10.3 APPENDIX C

PLAGIARISM

