
Autonomous Proctoring Software

A Major Project Report


Submitted by

SHREYAS RAO V.S (RA2111030010165)


BAVISHYATH K.R (RA2111030010174)
ABHISHEK DEVAGUDI REDDY (RA2111030010164)
ABHAYKRISHNA B S (RA2111030010168)
Under the Guidance of

MRS. V VIJAYALAKSHMI
(Assistant Professor, Department of Networking and Communications)

in partial fulfillment of the requirements for the degree of

BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING
with specialization in CYBER SECURITY

DEPARTMENT OF NETWORKING AND COMMUNICATIONS


SCHOOL OF COMPUTING
COLLEGE OF ENGINEERING AND TECHNOLOGY

SRM INSTITUTE OF SCIENCE AND TECHNOLOGY


KATTANKULATHUR – 603 203

MAY 2025
Department of Networking and Communications

SRM Institute of Science & Technology


Own Work* Declaration Form
To be completed by the student for all assessments
Degree/ Course : B.Tech CSE Spl(Cyber Security)
Student Name : Shreyas Rao, Bavishyath, Abhishek Reddy, Abhay Krishna

Registration Number : RA2111030010165, RA2111030010174, RA2111030010164, RA2111003010169

Title of Work : Autonomous Proctoring Software

We hereby certify that this assessment complies with the University's Rules and Regulations relating
to academic misconduct and plagiarism, as listed on the University website, in the Regulations, and in the
Education Committee guidelines.

We confirm that all the work contained in this assessment is my / our own except where indicated,
and that we have met the following conditions:

• Clearly referenced / listed all sources as appropriate

• Referenced and put in inverted commas all quoted text (from books, web, etc.)
• Given the sources of all pictures, data, etc. that are not our own
• Not made any use of the report(s) or essay(s) of any other student(s), either past or present
• Acknowledged in appropriate places any help that we have received from others (e.g., fellow students, technicians, statisticians, external sources)
• Complied with any other plagiarism criteria specified in the Course handbook / University website
We understand that any false claim for this work will be penalized in accordance with the university
policies and regulations.

DECLARATION:

We are aware of and understand the University's policy on academic misconduct and plagiarism, and
we certify that this assessment is our own work, except where indicated by referencing, and that
we have followed the good academic practices noted above.

Shreyas Rao V.S [RA2111030010165] Bavishyath [RA2111030010174]

Abhishek Reddy [RA2111030010164] Abhay Krishna [RA2111003010169]

DATE:

ii
ACKNOWLEDGEMENT

We express our humble gratitude to Dr. C. Muthamizhchelvan, Vice-Chancellor, SRM


Institute of Science and Technology, for the facilities extended for the project work and his
continued support.
We extend our sincere thanks to Dr. Leenus Jesu Martin M, Dean-CET, SRM Institute of Science
and Technology, for the invaluable support extended throughout the project.
We wish to thank Dr. Revathi Venkataraman, Professor & Chairperson, School of Computing,
SRM Institute of Science and Technology, for her support throughout the project work.
We extend our sincere thanks to Dr. M. Pushpalatha, Professor and Associate Chairperson – CS,
School of Computing, and Dr. C Lakshmi, Professor and Associate Chairperson – AI,
School of Computing, SRM Institute of Science and Technology, for their invaluable
support.
We are incredibly grateful to our Head of the Department, Dr. M. Lakshmi, Professor and Head,
Department of Networking and Communications, School of Computing, SRM Institute of Science
and Technology, for her suggestions and encouragement at all the stages of the project work.
We want to convey our thanks to our Project Coordinator, Dr. G. Suseela, Associate Professor,
Panel Head, Dr. P Balamurugan, Assistant Professor and Panel members, Mrs. V Vijayalakshmi,
Dr. H Karthikeyan, Dr. Mursal Hamdani Assistant Professors, Department of Networking and
Communications, School of Computing, SRM Institute of Science and Technology, for their inputs
during the project reviews and support.
We register our immeasurable thanks to our Faculty Advisor, Dr. A Arun, Department of
Networking and Communications, School of Computing, SRM Institute of Science and
Technology, for leading and helping us to complete our course.
Our inexpressible respect and thanks to our guide, Mrs. V Vijayalakshmi, Assistant Professor,
Department of Networking and Communications, SRM Institute of Science and Technology, for
providing us with an opportunity to pursue our project under her mentorship. She provided us
with the freedom and support to explore the research topics of our interest. Her passion for
solving problems and making a difference in the world has always been inspiring.
We sincerely thank the Networking and Communications department staff and students, SRM
Institute of Science and Technology, for their help during our project. Finally, we would like to
thank our parents, family members, and friends for their unconditional love, constant support, and
encouragement.

Shreyas Rao V.S (RA2111030010165)


Bavishyath K.R (RA2111030010174)
Abhishek Devagudi Reddy (RA2111030010164)
Abhaykrishna B S (RA2111030010168)

iii
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY

KATTANKULATHUR – 603 203


(Under Section 3 of the UGC Act, 1956)

BONAFIDE CERTIFICATE

Certified that this 18CSP108L project report titled “Autonomous Proctoring Software” is the bonafide
work of Shreyas Rao V.S (RA2111030010165), Bavishyath K.R (RA2111030010174), Abhishek
Devagudi Reddy (RA2111030010164), and Abhaykrishna B S (RA2111030010168), who carried out
the project work under my supervision. Certified further that, to the best of my knowledge, the work
reported herein does not form part of any other project report or dissertation on the basis of which a degree or
award was conferred on an earlier occasion on this or any other candidate.

Mrs. VIJAYALAKSHMI V                                    Dr. LAKSHMI M
Supervisor                                              Head of the Department
Assistant Professor                                     Professor
Department of Networking and Communications             Department of Networking and Communications

Internal Examiner External Examiner

iv
ABSTRACT

As remote learning grows rapidly, new forms of testing must still uphold academic integrity. Most proctoring
systems in use today rely either on intensive human supervision or on static AI models, which tend to be
invasive, difficult to scale, or prone to false positives. This report describes ProctorPro, a new autonomous
proctoring software system that addresses these drawbacks by combining machine learning, real-time
behavioral analysis, and privacy-preserving techniques. It is built as a modular system with continuous video
and audio capture, intelligent anomaly detection, real-time reporting, and biometric authentication. Using
facial recognition, gaze tracking, and audio signal analysis, the system detects abnormal behaviors,
impersonation, and environmental irregularities. Compared with traditional solutions, ProctorPro adapts to
different testing contexts, reduces false-positive alerts, and supports equitable evaluation by learning contextual
cues and students' habits over time. To limit privacy risks, the software employs methods such as data
minimization, real-time encryption, and federated learning, so that private data does not need to be stored
centrally and is less exposed to misuse. Edge computing is also used to reduce latency and bandwidth
utilization, allowing the system to scale smoothly to large examinations. Experimental results show that
ProctorPro detects potential cheating behaviour with a 94.5% detection rate while maintaining a 3.2%
false-positive rate. The system was also stress-tested under simulated multi-user sessions, poor network
conditions, and staged cheating scenarios, confirming its reliability and robustness. The report also explores
future developments, including blockchain-backed exam records, VR-based testing environments, and
multimodal biometric verification. Together, these elements balance automation, ethical safeguards, and
human supervision, offering an advanced solution for educational institutions seeking to promote trust,
transparency, and integrity in their online examinations.

v
TABLE OF CONTENTS

Chapter No Title Page No


ABSTRACT v
LIST OF FIGURES viii
LIST OF TABLES x
ABBREVIATIONS xi
1 INTRODUCTION 1
1.1 Introduction to Autonomous Proctoring Software 1
1.2 Motivation 2
1.3 Sustainable Development Goal of the Project 3
1.4 Product Vision Statement 5
1.5 Product Goal 7
1.6 Product Backlog 8
1.7 Product Release Plan 10
2 SPRINT PLANNING AND EXECUTION 11
2.1 Sprint 1 11
2.1.1 Sprint Goal with User Stories of Sprint 1 11
2.1.2 Functional Document 15
2.1.3 Architecture Document 17
2.1.4 UI Design 19
2.1.5 Functional Test Cases 20
2.1.6 Daily Call Progress 21
2.1.7 Committed vs Completed User Stories 22
2.1.8 Sprint Retrospective 23

vi
2.2 Sprint 2 23
2.2.1 Comparative Analysis of Models 23
2.2.2 Functional Document 28
2.2.3 Architecture Document 31
2.2.4 UI Design 33
2.2.5 Functional Test Cases 34
2.2.6 Committed vs Completed User Stories 35
2.2.7 Sprint Retrospective 36
2.3 Sprint 3 36
2.3.1 Sprint Goal with User Stories of Sprint 3 37
2.3.2 Functional Document 41
2.3.3 Architecture Diagram 44
2.3.4 UI Design 46
2.3.5 Functional Test Cases 47
2.3.6 Committed vs Completed User Stories 48
2.3.7 Sprint Retrospective 49
3 RESULTS AND DISCUSSION 50
3.1 Project Outcomes 50
4 CONCLUSION AND FUTURE ENHANCEMENTS 51
5 REFERENCES 55
APPENDIX 57
A CONFERENCE PUBLICATION

B SAMPLE CODE

C PLAGIARISM REPORT

vii
LIST OF FIGURES

Figure No Title Page No

1.1 MS Planner Board of Autonomous Proctoring System 9

1.2 Release plan of Autonomous Proctoring Software 10

2.1 User story for user registration 12

2.2 User story for Theme and font selection 13

2.3 User story for developer 14

2.4 System Architecture diagram 18

2.5 UI design for login page 19

2.6 Standup Meetings 21

2.7 Bar graph for Committed Vs Completed User Stories 22

2.8 User story for Facial Re-Verification 25

2.9 User Stories for Gaze and Head Movement 26

2.10 User stories for Multi-Face Detection 26

2.11 Audio Anomaly Detection 27

2.12 User Stories for Real-Time Proctor Alerts 27

viii
2.13 UI Design 33

2.14 Bar graph for Committed vs Completed User Stories 35

2.15 User Stories for Proctor Dashboard 38

2.16 User Stories for Flag Review and Manual Control 39

2.17 User Stories for Session Recovery 39

2.18 User Stories for Report Exporting 40

2.19 User Stories for Customizable exam UI 40

2.20 UI Design 46

2.21 Bar graph for Committed vs Completed User Stories 48

ix
LIST OF TABLES

Table No Title Page No

2.1 Detailed User Stories of Sprint 1 11

2.2 Functional Test Case of Sprint 1 20

2.3 Sprint Retrospective for Sprint 1 23

2.4 Detailed User Stories of Sprint 2 23

2.5 Functional Test Case for Sprint 2 34

2.6 Sprint Retrospective for Sprint 2 36

2.7 Detailed User Stories for Sprint 3 37

2.8 Functional Test Case of Sprint 3 47

2.9 Sprint Retrospective for Sprint 3 49

x
ABBREVIATIONS

AES – Advanced Encryption Standard
ANN – Artificial Neural Network
CNN – Convolutional Neural Network
CV – Computer Vision
DB – Database

xi
CHAPTER 1
INTRODUCTION

1.1 Introduction to Autonomous Proctoring Software:


In recent years, the academic landscape has undergone a complete transformation. With the shift
to virtual classes and distance learning, schools and colleges have had to reconsider the way they
conduct examinations. Online tests may be convenient and flexible, but they also bring new
challenges, the most pressing of which is how to uphold academic integrity during unsupervised
testing. Testing practices used over the years, whether manual human monitoring or simple
screen-monitoring software, no longer suffice: tests may now be taken from almost any location,
and it is no longer practical to oversee assessments spread across many countries and time zones.

Given the speed of this change and the high demand for secure, effective, and scalable testing tools,
many artificial-intelligence-based remote proctoring technologies have already emerged. The
existing AI-based remote online proctoring tools, however, have considerable drawbacks: they can
be overly intrusive, racially and gender biased, or simply incapable of recognizing behavior that
actually indicates misconduct. Students looking away from the screen for legitimate reasons, or
shifting position, have been flagged, provoking anxiety and, in some cases, undue penalties. All of
this motivates an intelligent autonomous proctoring system: ProctorPro. Designed with empathy
and a strong emphasis on integrity, ProctorPro builds on cutting-edge technologies such as machine
learning, computer vision, and behavioral analysis, enabling it to identify genuinely suspicious
behavior, such as impersonation or unauthorized assistance, while minimizing false alarms.

1
1.2 Motivation
This project responds to an intense and growing concern in formal education: how to effectively
uphold academic integrity in an increasingly digital and remote world.

With the rapid shift to online learning accelerated by the COVID-19 pandemic, assessments are no
longer held in a typical classroom. Students take major tests from home, often across borders and
time zones, and in very different environments from one another. This setting offers tremendous
flexibility and choice. At the same time, it introduces new risks: cheating, impersonation, and the
loss of meaningful, fair assessment for everyone examined. Traditional methods of proctoring
cannot keep up. Human proctors do not scale, and the mental load of monitoring hundreds of
students by webcam is enormous. Early AI-based proctoring solutions are now receiving backlash
for being invasive, unreliable, and ethically questionable. Some students feel uneasy, and some
systems treat a student's need to look away to think, or unavoidable background noise, as dishonest
behavior, which not only creates stress but undermines trust. The gap these challenges expose is
the need for a more intelligent and humane way of supporting assessment in online courses.

2
1.3 Sustainable Development Goal of the Project
The Autonomous Proctoring Software project represents not just a technological innovation;
it's a step toward a smarter, fairer, and more accessible future for education. Besides
contributing to SDG 16: Peace, Justice and Strong Institutions, this solution addresses several
areas of SDG 9: Industry, Innovation and Infrastructure, and targets SDG 4: Quality Education.
This project fills the gap between academic integrity and digital accessibility, making it a
responsible technological solution to some of today's most important global educational
issues.

SDG 4: Quality Education

Ensure inclusive and equitable quality education and promote lifelong learning
opportunities for all. In a rapidly digitalizing world, our education must adapt to stay inclusive
and credible. Online learning has increased access for millions, especially for those in far-off
or poorly served areas. Access is only part of what is needed; credibility and fairness in
assessments also matter. Online education should not be treated as second-rate, since the students
who choose this route are often those who most need flexibility. This project contributes toward SDG 4
through: Trusted Student Assessment: wherever students may be, assessors can trust that the
evaluation process is safe, monitored, and reliable.

Levelling the Playing Field: Instead of invasive techniques or biased assumptions, which risk
marginalizing students from different backgrounds, abilities, or environments, the system
harnesses behaviour based analysis.

Lowering the Barriers to Entry: the system is built for off-the-shelf scalability with a low-bandwidth
footprint, so it also functions well in developing regions with limited internet infrastructure, making
quality educational assessment possible for more of the global population.

SDG 9: Industry, Innovation and Infrastructure

Goal 9: Build resilient infrastructure, promote sustainable industrialization and foster


innovation. An autonomous proctoring system for the use of AI, whose technical complexity
is by no means the sole measure of achievement—this is indeed an innovative application as
well as a socially responsible application of technology. Edge computing, computer vision and
adaptive machine learning come together to tackle a real-world problem with practical,
scalable solutions.

3
How it contributes: Encouraging Ethical AI Advancement: ethical AI models derive insights from
student data, combined with data minimization and encryption to protect privacy. Supporting
Infrastructure for Remote Learning: the platform fosters a more sustainable and distributed exam
infrastructure with less reliance on costly hardware or centralized systems. Fostering Technological
Equity: it supports educational technology innovation that can be tailored to the distinct needs of
urban and rural users.

SDG 16: Peace, Justice and Strong Institutions

Promote peaceful and inclusive societies for sustainable development, provide


access to justice for all and build effective, accountable and inclusive institutions at all levels.
In places of learning, justice and accountability take the form of fair assessments and equitable
opportunity. Surveillance tools that are biased or invasive erode students' trust and wellbeing.
ProctorPro helps by promoting transparency and inclusivity, allowing institutions to maintain
integrity without violating ethical norms.

This project contributes to SDG 16 by:

Lowering Exam Bias and Procedural Errors: unlike fixed rule-based systems, a trained AI can
pinpoint real deviations while disregarding normal behaviour such as glancing away to think or
shifting to a more relaxed posture, thus reducing false flags.

Building Institutional Trust: This offers schools and universities a method that complies with
an ethical and legal framework (GDPR, student consent, data transparency);

Supporting Student Rights: Storing limited use data and deleting real-time footage after
analysis preserves the dignity of students’ digital identity and limits long term risk.

Broader Impact: In a world where the march of digital transformation frequently outstrips the
pace of regulation and ethics, this project also serves as a beacon of ethical technology
progress. It integrates and streamlines education, technology, and justice. Whether it is helping
a student from a remote village to take a certified exam or empowering a teacher to conduct
fair assessments without the fear of cheating, the solution touches lives in so many ways.

On the surface this is a software solution, but at its core it is a step towards an education system
that is inclusive, secure, and sustainable.

4
1.4 Product Vision Statement
1.4.1The Vision

We envision a world in which all students, regardless of where they are located, who they are, or
where they study, are able to take tests fairly and confidently, without feeling that they are under
constant observation, scrutiny, or suspicion. In that world, online testing is not second best; it is
secure, safe, and liberating. ProctorPro is our attempt to begin creating such a world. ProctorPro is
more than proctoring software. It is a trust-enabling platform driven by technologies such as
artificial intelligence, behavior analysis, and real-time monitoring, enabling a new paradigm of
ethical, intelligent, and scalable exam monitoring rather than control. It strikes the right balance
between the need for academic integrity and empathy, privacy, and user comfort. We are not here
to establish an all-seeing surveillance system. We are here to humanize exam supervision, to give
teachers confidence, and to give students the respect they deserve in an increasingly digital age.

1.4.2 What We Mean to Fix

Remote learning has increased by leaps and bounds, but how we administer exams hasn't.
Present proctoring systems are either:

● Too labor-intensive with human invigilators and not scalable.


● Too rigid, flagging harmless behavior as misconduct, or

● Too invasive, violating students' privacy and giving them anxiety.

These legacy systems create suspicion, fear, and even dropouts among students who cannot defend
themselves until their names are cleared. Teachers, meanwhile, are saddled with false positives and
left without systems that offer actionable intelligence. We believe that technology must work for
humanity, not the other way around, and we are working diligently to bridge this gap with a smart,
fair, and humane product.

1.4.3 Our Core Values Under Our Vision

● Trust: Trust underlies any learning environment. Our software does not merely
detect; it earns the trust of learners and educators by being transparent,
explainable, and equitable.

5
● Empathy: Students are not machines. They walk, they think aloud, and sometimes
they won't even look at you. Our system learns what suspicious behavior is and
what normal human behavior is, minimizing false alarms and stress.
● Scalability: We envision this platform being used by schools, universities, and
certifying organizations globally. ProctorPro was built to track hundreds of
students simultaneously without sacrificing performance or integrity.
● Privacy-First Design: In a world where information can be so easily misused,
Proctor Pro ensures privacy is never an afterthought. Real-time encryption, data
retention to the barest essentials, and responsible AI practices are ingrained in the
very fabric.
● Inclusivity: Our vision is for every student—students with disabilities, low-end
device students, or students in unstable settings. The system adapts, not students.

1.4.4 The Future We're Building

With ProctorPro, schools will no longer be forced to trade credibility for convenience. Online
exams can be taken with the same confidence as classroom exams. Teachers will have dashboards
that bring clarity, not confusion. Learners will be able to work in peace, without the fear of a system
peering over them. In the long run, this product will support voice recognition, iris scanning, and
gesture detection; integrate seamlessly with learning management systems for a streamlined exam
process; and offer comprehensive post-exam analysis that guides teaching rather than merely
flagging cheating. ProctorPro is not just a piece of software; it is a statement that education
technology can and should be built with responsibility, empathy, and vision. We are creating a
platform that allows institutions to maintain their honor, while also giving learners the right to grow
with dignity.

6
1.5 Product Goal
The ultimate objective of ProctorPro is to provide an intelligent, secure, and student-centric solution
through which online assessments can uphold academic integrity while respecting the user's
experience, privacy, and accessibility.

The product is intended to rebuild the trust between an institution and the students it puts through
an online examination. It does so by employing artificial intelligence to monitor how a student
behaves during the examination, replacing rigid rules and removing the need for permanent human
supervision. ProctorPro identifies potential violations as they occur in real time, while understanding
that a glance at the clock or a nervous fidget is not a sign of academic dishonesty. The
machine-learning-based solution is context-based rather than merely activity-based. It not only
produces reliable observations but also supports monitoring of online assessments with hundreds of
concurrent users without any loss of performance. All of this reinforces institutional trust by
providing a seamless testing experience together with an audit report detailing student behavior.

A critical component of our objective is the focus on privacy and ethics. Every aspect of
ProctorPro's services (facial recognition, screen recording, and anomaly detection) is governed by
compliance with formal data-protection requirements. Strict standards for real-time encryption,
limited data retention, and consent-based usage make the solution as much about student rights and
privacy as about academic integrity. The focus on students goes well beyond posturing to
institutions: the product is intended to relieve the anxiety typically associated with AI-proctoring
solutions through clear, actionable feedback and an easy-to-use interface that accommodates diverse
situations, including accessibility needs, device variations, and connectivity issues. To summarize,
ProctorPro's product objective is to deliver a scalable, privacy-centric, and compassionate remote
proctoring solution that protects the academic integrity of online education while respecting the
unique needs and human dignity of every learner. It is a progressive solution that reconciles safety
and trust in the digital education space.

7
1.6 Product Backlog

S.No User Stories of Autonomous Proctoring Software

#US1 As a new user, I need to easily register for the platform so that I can gain access to its features according to my access level.

#US2 As a user, I need the ability to choose a theme, such as the system default theme, light theme, or dark theme.

#US3 As a user, I need MFA and biometric authentication for login and registration.

#US4 As a user, I want to engage in forums and group activities so that I can share and collaborate with others.

#US5 As a user, I want the interface to be simple and clean so that I can easily access the software.

#US6 As a user, I want to create and browse skill-sharing listings so that I can offer my skills to others and discover opportunities to learn new skills from others.

#US7 As a user, I want peer-to-peer communication between the student and the teacher.

#US8 As a student, I must receive my results as soon as I complete the exam.

#US9 If there are any internet issues during the exam, it must be possible to resume the exam after the network is restored.

8
The product backlog of the Autonomous Proctoring Software was configured using the MS Planner
Agile Board, shown in Figure 1.1 below. The product backlog consists of the complete set of user
stories for the Autonomous Proctoring Software.

Each user story contains the necessary parameters, such as MoSCoW prioritization, functional and
non-functional parameters, and detailed acceptance criteria with linked tasks.

Figure 1.1 MS Planner Board of Autonomous Proctoring System

9
1.7 Product Release Plan

The following Figure 1.2 depicts the release plan of the project

Figure 1.2 Release plan of Autonomous Proctoring Software

10
CHAPTER 2

SPRINT PLANNING AND EXECUTION

2.1 Sprint 1
2.1.1 Sprint Goal with User Stories of Sprint 1
The goal of the first sprint is to establish foundational components, define system roles, prepare
the architecture, and initiate biometric authentication.

The following Table 2.1 presents the detailed user stories of Sprint 1.

Table 2.1 Detailed User Stories of sprint 1

S.No Detailed User Stories

US#1 As a new user, I need to easily register for the platform so that I can gain access to its features according to my access level.

US#2 As a user, I need the ability to choose a theme, such as the system default theme, light theme, or dark theme.

US#3 As a user, I need MFA and biometric authentication for login and registration.

US#4 As a user, I want to engage in forums and group activities so that I can share and collaborate with others.

The Planner Board representation of these user stories is shown in Figures 2.1, 2.2 and 2.3 below.

11
Figure 2.1 User story for user registration

12
Figure 2.2 User story for Theme and font selection

13
Figure 2.3 User story for developer

14
2.1.2 Functional Document
2.1.2.1 Introduction
The goal of the ProctorPro project's first sprint is to build a strong functional and technical
foundation for the autonomous proctoring system. Establishing essential elements such as
role-based access control, facial biometric configuration, secure user registration, and the
preliminary exam scheduling framework is the primary focus of this sprint. This is a
foundational sprint, ensuring that all users are properly authenticated, privacy concerns are
addressed from the start, and the system is constructed to accommodate future features in a
scalable, modular fashion.

2.1.2.2 Product Goal

Establishing the baseline infrastructure needed for a credible and safe remote proctoring
platform is Sprint 1's main objective. This includes making sure that:

● Users may register and log on safely.


● Biometric data, including facial scans, is captured for future identity verification.
● Users are assigned clearly defined permissions for particular roles (Student, Proctor, Admin).
● Exam schedules may be created and configured by administrators.
● Presenting users with privacy policies and requiring clear permission before allowing
any type of monitoring guarantees openness on the platform.
● These features together maintain technical credibility and ethical responsibility while
preparing the system for live exam monitoring in later phases.
2.1.2.3 Business Process
The major business processes included in Sprint 1 are:
● User Registration: New users can register with their email address and designate a
secure login. After registration, users are asked to build their profile—we're using a
facial scan for biometric verification.
● Role Assignment: An administrator assigns access to registered users based on
predefined roles (Student, Proctor, or Admin). Each role carries a pre-defined set of
permissions to grant relevant access and limit misuse (a minimal sketch of this mapping follows this list).
● User Authentication: The system authenticates users in the traditional way
(email/password), as well as by facial recognition, to securely check a user in before
taking an exam.

15
● Exam Setup: Instructors or administrators can set up future exams, create basic options
(date/time/duration), and select registered students.
● Privacy Notification: Any monitoring or recording elements will not commence until
students have viewed a privacy policy and affirmatively consented.
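As an illustration of the role-assignment process described above, the following is a minimal sketch assuming a simple in-memory mapping. The role names follow the report (Student, Proctor, Admin), while the permission strings and helper names are illustrative assumptions, not the project's actual API.

# Minimal role-based access control sketch. Role names follow the report
# (Student, Proctor, Admin); the permission strings and helper names are
# illustrative assumptions only.
from enum import Enum

class Role(Enum):
    STUDENT = "student"
    PROCTOR = "proctor"
    ADMIN = "admin"

# Hypothetical permission sets per role.
PERMISSIONS = {
    Role.STUDENT: {"take_exam", "view_own_results"},
    Role.PROCTOR: {"view_live_dashboard", "review_flags"},
    Role.ADMIN:   {"schedule_exam", "assign_roles", "export_reports"},
}

def has_permission(role: Role, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

# Example: a proctor may review flags but may not schedule exams.
assert has_permission(Role.PROCTOR, "review_flags")
assert not has_permission(Role.PROCTOR, "schedule_exam")

In the platform described later in Section 2.1.3, a dedicated Role Management Service would enforce such checks behind its REST API rather than in application code.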

2.1.2.4 Features

Feature 1: Registration

● Description: Allows new users to register with an email address and establish
a secure password.
● User Story: US001 - As a new user, I want to register with an email and password and
access the platform.

Feature 2: Biometric Setup

● Description: Captures the user's facial information to be used for identity
verification at the time the exam is taken.
● User Story: US002 - As a user, I want to establish facial verification as part of my
user profile so that my identity can be verified when I take exams.

Feature 3: Role based access

● Description: The admin can assign user roles (Student, Proctor, Admin) with their
corresponding access rights and system capabilities.
● User Story: US003 - As an admin, I want to assign user roles that define access rights
corresponding to each role's access level.

Feature 4: Exam Events Scheduler

● Description: Admins or system personnel create an exam event by selecting
attributes such as time and subject, and by choosing which users can take each exam.
● User Story: US004 - As an instructor, I want to create exam events that allow
students to take exams remotely.
Feature 5: Privacy & Consent

● Description: Informs users about the sensitivity of their data and explains how the
system will monitor their activity once they proceed, so that consent is informed.

16
● User Story: US005 - As a user, I want to read and consent to the privacy policy prior
to sitting an exam.

2.1.2.7 Assumptions

1. The user base is technically capable of basic onboarding: We assume students, admins, and
proctors have a device with internet access and can comfortably register an account, log in, and
follow basic on-page instructions. Users are also assumed to have a basic working knowledge of
webcam-enabled technology, so they can begin the onboarding process.
2. Devices have a webcam and microphone: For facial biometric registration and authentication
to work, we assume users have devices (laptop, tablet, or phone) with working webcams. The
webcam will be used in future sprints for live monitoring; in Sprint 1 it is used solely for the
registration face-capture process.
3. Facial recognition is limited to identity authentication: During Sprint 1, facial recognition is
used only for user identity authentication, and later for pre-exam validation. There is no real-time
tracking or monitoring, and this phase does not include AI-based behavioural analysis.
4. Biometric data is secured and managed ethically: We assume facial data collected during
registration is held in a secure, encrypted format and stored in compliance with privacy
legislation (e.g., GDPR). Consent is always obtained before biometric data is collected (a minimal
sketch of encrypted template storage follows this list).
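Assumption 4 above, together with the architecture note later in this sprint that only encrypted biometric hashes are persisted rather than raw images, can be illustrated with a small sketch. The snippet below uses AES-256-GCM from the Python cryptography package; the embedding format, key handling, and function names are assumptions for illustration, not the project's actual implementation.

# Sketch: encrypt a facial embedding (not the raw image) before storing it.
# Uses AES-256-GCM from the `cryptography` package; key management, the
# embedding format, and the function names are illustrative assumptions.
import os
import json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_embedding(embedding: list[float], key: bytes) -> dict:
    """Encrypt a face embedding with AES-256-GCM and return storable fields."""
    aesgcm = AESGCM(key)                      # key must be 32 bytes for AES-256
    nonce = os.urandom(12)                    # unique nonce per record
    plaintext = json.dumps(embedding).encode()
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)
    return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}

def decrypt_embedding(record: dict, key: bytes) -> list[float]:
    """Recover the embedding for re-verification before an exam."""
    aesgcm = AESGCM(key)
    plaintext = aesgcm.decrypt(bytes.fromhex(record["nonce"]),
                               bytes.fromhex(record["ciphertext"]), None)
    return json.loads(plaintext)

key = AESGCM.generate_key(bit_length=256)     # in practice, from a key vault
stored = encrypt_embedding([0.12, -0.53, 0.07], key)
assert decrypt_embedding(stored, key) == [0.12, -0.53, 0.07]

Because only the encrypted template is stored, a database compromise does not directly expose the student's biometric data, which is consistent with the data-minimization goal stated in the abstract.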

2.1.3 Architecture Document


2.1.3.1 Application

System Components (Microservices-Oriented)

1.Authentication Service

● Performs user login/registration via email/password


● Interacts with biometric module (facial recognition)
● Generates a secure JWT token after a user is authenticated (a minimal sketch follows this component list)

2.Biometric Service-

● Captures and stores facial scan at registration

17
● Validates the face prior to the exam
● Interacts with the frontend via WebRTC/Webcam API

3.Role Management Service

● Will assign a user role (Student, Admin, Proctor)


● Will manage access to certain features of the platform using role
management/permissions

4.Exam Management Service

● Enables an admin to create/manage exams and their schedules


● Stores the metadata, such as exam name, date/time and for which users

5.Privacy Compliance Module

● Displays privacy policy and captures consent from user


● Ensures that no data monitoring will commence until policy is accepted

6.Data Storage & Flow Database (Postgres - via Supabase):

● Stores user profiles, roles and scheduled exams


● Biometric hashes of scanned facial images are stored encrypted as opposed to actual
images
7.APIs (REST Based):

● Each service communicates with the others via secure REST APIs
● Ensures modularity and isolation of critical services
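The Authentication Service listed above issues a JWT once a user has passed both password and facial checks. The following minimal sketch uses the PyJWT library; the claim names, secret handling, and one-hour expiry are assumptions for illustration, not the project's actual token format.

# Sketch: issue and verify a short-lived JWT after successful authentication.
# Uses PyJWT; claim names, secret handling, and the one-hour expiry are
# illustrative assumptions.
from datetime import datetime, timedelta, timezone
import jwt  # PyJWT

SECRET = "replace-with-a-real-secret"  # in practice, loaded from secure config

def issue_token(user_id: str, role: str) -> str:
    """Create a signed token once password and facial checks both pass."""
    payload = {
        "sub": user_id,
        "role": role,                                   # Student / Proctor / Admin
        "exp": datetime.now(timezone.utc) + timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    """Validate signature and expiry; raises jwt.InvalidTokenError on failure."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("student-001", "Student")
print(verify_token(token)["role"])  # -> Student

Each downstream microservice can then validate the token locally, which keeps the services modular and avoids a shared session store.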

Figure 2.4 System Architecture diagram

18
2.1.4 UI DESIGN

Figure 2.5 UI design for login page

19
2.1.5 Functional Test Cases

Table 2.2 Functional Test Case of Sprint 1

20
2.1.6 Daily Call Progress

Figure 2.6 Standup meetings

21
2.1.7 Committed Vs Completed User Stories

Figure 2.7 Bar graph for Committed Vs Completed User Stories

22
2.1.8 Sprint Retrospective

Table 2.3 Sprint Retrospective for Sprint 1

2.2 Sprint 2
2.2.1 Sprint Goal with User Stories of Sprint 2

The goal of the second sprint is to build the real-time monitoring layer of the platform: facial
re-verification, gaze and head-movement tracking, multi-face detection, audio anomaly detection,
and real-time proctor alerts.

The following Table 2.4 presents the detailed user stories of Sprint 2.

Table 2.4 Detailed User Stories of Sprint 2


S.No Detailed User Stories
US #1 As a student, I want the system to authenticate my identity again during the
exam, to confirm it is really me and that no impersonation is taking place.
US#2 As a system, I want to capture where the student is looking during the exam, so
that I can observe when they are frequently distracted or glancing off-screen
US#3 As a proctor, I want alerts if there are more than one face in a student's web
camera so that I can intervene to prevent cheating/being helped in person
US#4 As a system, I want to listen for background voices or suspicious sounds, so I
can detect possible cheating or conversations that may take place during the
exam
US#5 As a proctor, I would like alerts as soon as a student's behavior is suspicious so
that I can take prompt action to support exam integrity.

23
The Planner Board representation of these user stories is shown in Figures 2.8 to 2.12 below.

24
Figure 2.8 User story for Facial Re-Verification

25
Figure 2.9 User Stories for Gaze and Head Movement

Figure 2.10 User stories for Multi-Face Detection

26
Figure 2.11 Audio Anomaly Detection

Figure 2.12 User Stories for Real-Time Proctor Alerts

27
2.2.2 Functional Document

2.2.2.1. Introduction

Sprint 2 marks the most significant transition in the development of ProctorPro. So far we have
been in the system set-up phase; in Sprint 2 we begin to build the actual AI-powered monitoring
capabilities. This sprint introduces real-time exam monitoring features such as facial re-verification,
gaze tracking, multiple-face detection, and audio-based anomaly alerts, all designed with accuracy,
privacy, and user-centricity in mind.

The idea is to build intelligence into the system so it can make contextual decisions during exams
without imposing too much on proctors or negatively impacting students' comfort.

2.2.2.2 Product Goal

To summarize, the main purpose of Sprint 2 is to build a smart monitoring layer in the platform
that can observe a student's behavior during an exam and intelligently flag inappropriate
behaviors. Unlike traditional means of monitoring students during exams, the solution
proposed in this sprint:

● Uses machine learning to understand typical behavior vs. anomalies.


● Only alerts proctors, in real-time, as necessary.
● Protects user privacy by processing data locally (edge computing) and encrypting
sensitive streams.

The AI does not rely on fixed assumptions; it learns from patterns in students' typical
movements and actions as they evolve.
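As a rough illustration of learning typical behaviour instead of applying fixed rules, the sketch below keeps a rolling baseline of a single behavioural signal (the fraction of time the gaze is off-screen) and flags only large deviations from that student's own history. The signal name, window size, and threshold are assumptions for illustration; the report does not specify the actual model used.

# Sketch: flag deviations from a student's own rolling behavioural baseline.
# The feature (off-screen gaze ratio), window size, and 3-sigma threshold are
# illustrative assumptions, not the model described in this report.
from collections import deque
from statistics import mean, pstdev

class BehaviourBaseline:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # recent per-interval observations
        self.threshold = threshold            # std-devs counted as anomalous

    def update(self, off_screen_ratio: float) -> bool:
        """Add one observation; return True if it deviates strongly from baseline."""
        anomalous = False
        if len(self.history) >= 10:           # wait for a minimal baseline first
            mu = mean(self.history)
            sigma = pstdev(self.history) or 1e-6
            anomalous = abs(off_screen_ratio - mu) > self.threshold * sigma
        self.history.append(off_screen_ratio)
        return anomalous

baseline = BehaviourBaseline()
for ratio in [0.05, 0.04, 0.06, 0.05, 0.07, 0.05, 0.04, 0.06, 0.05, 0.05, 0.55]:
    if baseline.update(ratio):
        print(f"Flag for proctor review: off-screen ratio {ratio:.2f}")

Because the baseline is per student, a candidate who habitually looks away while thinking raises the threshold for their own behaviour, which is the kind of context-based flagging this sprint aims at.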

2.2.2.3. Business Process

The behavioral monitoring and real-time alert workflows included in Sprint 2 are as follows:

● Facial Re-Verification: Before and during an exam, the system checks periodically to
ensure the same person is present. This will help avoid issues with impersonation.
● Eye Gaze & Head Movement Monitoring: The AI tracks head and eye movement and
detects when a student is frequently looking away, which may raise an alert.

28
● Multiple Face Detection: The system searches for extra faces in the camera, which
would demonstrate that someone may be assisting the student.
● Background Voice Monitoring: Audio is constantly being scanned for whispers and
background conversation with sound analysis tools.
● Anomaly Detection & Real-Time Alerts: Anomaly detection will flag behavior in
real-time and notify the assigned proctor in a subtle manner.

These processes run quietly in the background, without distracting the student but with the
instructor's knowledge.

2.2.2.5. Features

This project focuses on implementing the following key features:

Feature 1: Facial Re-Verification

● Description: Verifies that the student’s face before and during the exam is the same as
what is registered as biometric data.
● User Story: US006 - As a student, I want the application to perform facial re-verification
during the exam, to avoid impersonation.

Feature 2: Eye Gaze & Head Movement Tracking

● Description: Uses AI to check if the student is looking away from the screen regularly.
● User Story: US007 – As a system, I need to track the gaze and head movement in order
to detect distractions.

Feature 3: Detection Based on Multiple Faces

● Description: Alerts when more than one face is observed in the camera feed.
● User Story: US008 - As a proctor, I want a notification if multiple faces are detected on
screen.

Feature 4: Background Audio Monitoring

● Description: Listens via the microphone to detect whispers or unauthorized discussion during an exam.

● User Story: US009 - As a system, I want to detect forbidden communication by
monitoring background audio.

29
Feature 5:ML Based Behavior Anomaly Detection

● Description: Learns the expected behavior of a student in real time and flags deviation
from expected norms as potentially suspicious.
● User Story: US010 - As an AI model, I want to track unusual pattern deviation

Feature 6: Live Alert Notifications

● Description: Sends real-time alerts to the proctor’s dashboard when a high-risk


behavior is detected.
● User Story: US011 - I want to get live alerts for suspicious activities as a proctor.

2.2.2.7 Assumption

1. Users will grant camera and microphone access: We expect that students will grant webcam and
microphone access when they first begin the assessment. This is crucial for the system's real-time
face tracking, gaze detection, and audio anomaly detection, and is essential for the core functionality
delivered in Sprint 2.

2. Exams will be taken in a stable, controlled environment: We assume that students take their
assessments in a reasonably quiet, private place where the webcam has a clear view of the
candidate's face and ambient sound can be picked up clearly by the system. Very dark rooms, loud
environments, or other people moving in and out can all interfere with the accuracy of the
monitoring tools.

3. AI models start from a behavioural baseline: The anomaly detection engine must assume some
prior data (or default patterns) of normal exam behaviour to compare against. While the system
continues to learn over time, an initial behavioural baseline is needed so that the first flagging and
detection logic can be derived.

4. Proctors can review alerts in real time: The system is built to offload some of the proctor's
manual burden, but it still expects a human proctor to be available to scan alerts, examine the live
stream for high-risk flags, and act when necessary.

5. All sensitive data is treated securely and respectfully: This sprint assumes that all monitoring
data, such as facial images, audio waveforms, and alert logs, is encrypted and handled in keeping
with privacy standards (e.g., GDPR), and that data is not retained indefinitely unless needed for
review or legal purposes.

30
6. Real-time communication channels are stable: Socket-based alerts and live dashboards require
stable client-server communication. This sprint assumes that exam sessions run under reasonable
internet connectivity and latency, so real-time alerts arrive without delay.

7. Students are not notified of each detection in real time: To avoid anxiety, or manipulation of
the monitoring process, students do not receive alerts for every potential detection the system
makes. Alerts instead go to the proctor, who decides whether to act on an alert or simply dismiss it.

2.2.3 Architecture Document


Updated system components

1.Monitoring Engine:

● Tracks gaze direction, facial features, and the presence of additional faces from the webcam
feed (a minimal sketch follows this component list).
● Utilizes TensorFlow.js and MediaPipe for web-based processing.

2.Audio Analysis Module:

● Real-time microphone input detection and analysis.

● Flags suspicious activity based on unapproved conversation and noise patterns.

3.ML-Based Behavior Detector:

● Compares live behaviour to pre-trained behaviour baseline to look for outliers or


abnormal conditions present in test-taking behaviour.

4.Real-Time Alerting System:

● Built using Socket.IO to deliver alert notifications to the proctor interface in real time.
● Notifications are stored with a timestamp and a brief log of the notification.

5.Privacy & Security Features:

● Applies AES-256 encryption to all data streams during the exam.
● No raw audio or video is ever retained; only encrypted metadata and snapshots are kept.

31
● On-device processing minimizes lag and maximizes privacy across the
workflow.
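The Monitoring Engine above runs face and gaze analysis in the browser with TensorFlow.js and MediaPipe. As a rough server-side approximation of the multi-face check, the sketch below uses MediaPipe's Python face-detection solution to raise a flag when more than one face appears in a frame; the frame source, confidence threshold, and flag wording are assumptions, and the actual in-browser implementation would differ.

# Sketch: multi-face detection on a single frame with MediaPipe's Python API.
# The report's Monitoring Engine uses TensorFlow.js/MediaPipe in the browser;
# this Python version is an illustrative approximation only.
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection

def count_faces(frame_bgr) -> int:
    """Return the number of detected faces in one BGR webcam frame."""
    with mp_face.FaceDetection(model_selection=0,
                               min_detection_confidence=0.5) as detector:
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        result = detector.process(rgb)
        return len(result.detections or [])

capture = cv2.VideoCapture(0)          # default webcam
ok, frame = capture.read()
if ok and count_faces(frame) > 1:
    print("Flag: multiple faces detected in the candidate's camera feed")
capture.release()

In production this check would run continuously on sampled frames rather than once, with only the resulting flag (not the frame itself) leaving the device, in line with the privacy features listed above.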

Data Flow

1.Student Exam Session:

● Facial and audio feed are monitored on-device.


● If flagged by the monitoring engine for a potential cheating event (multiple faces
detected or other anomalies), the real-time flag notification is sent to the server.

2.Server Session:

● The flag is logged in the server database. A notification with context (timestamp, flag type,
and risk level) is sent to the assigned proctor (see the sketch after this data-flow overview).

3.Proctor Interface:

● Displays a live stream of students available.


● Flags students with active violations in a clear way that does not disrupt the
test.
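The alerting path described above pushes each flag to the assigned proctor over Socket.IO with a timestamp, flag type, and risk level. The sketch below shows that shape of message from a Python client using the python-socketio package; the event name, server URL, and payload fields are assumptions, since the report does not give the actual wire format.

# Sketch: emit a flag event to the proctoring server over Socket.IO.
# Event name, server URL, and payload fields are illustrative assumptions;
# the report specifies Socket.IO but not the exact message format.
from datetime import datetime, timezone
import socketio  # python-socketio client

sio = socketio.Client()
sio.connect("http://localhost:5000")      # hypothetical alerting server

def send_flag(student_id: str, flag_type: str, risk: str) -> None:
    """Send one timestamped alert for the proctor dashboard."""
    sio.emit("proctor_alert", {
        "student_id": student_id,
        "type": flag_type,                # e.g. "multiple_faces", "gaze_off_screen"
        "risk": risk,                     # e.g. "low" / "medium" / "high"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

send_flag("student-001", "multiple_faces", "high")
sio.disconnect()

On the server side, each such event would be persisted with its timestamp and forwarded to the dashboard of the proctor assigned to that exam session, as described in the server-session step above.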

32
2.2.4 UI DESIGN

Figure 2.13 Student Dashboard

33
2.2.5 Functional Test Cases

Table 2.5 Detailed Functional Test Case

34
2.2.6 Committed Vs Completed User Stories

Figure 2.14 Bar graph for Committed Vs Completed User Stories

35
2.2.7 Sprint Retrospective

Table 2.6 Sprint Retrospective for Sprint 2

2.3 Sprint 3
2.3.1 Sprint Goal with User Stories of Sprint 3
The goal of the third sprint is to make the platform deployment-ready: a live proctoring dashboard,
manual review of flagged incidents, session recovery, report exporting, and a refined exam UI.

The following Table 2.7 presents the detailed user stories of Sprint 3.

Table 2.7 Detailed User Stories of sprint 3

S.No Detailed User Stories


#US1 As a proctor, I want to view multiple students in real-time, so that I can
oversee exams more efficiently and respond to issues as they happen.

#US2 As a proctor, I want to manually review flagged incidents, so I can make a
final decision about whether the behavior was truly suspicious.

36
#US3 As a student, I want the system to automatically reconnect after an internet
drop, so I do not lose progress on my exam or get unfairly penalized.

#US4 As an admin, I want to produce reports of all flagged activity during the exam,
so I can provide summaries to instructors, institutions, or compliance officers

#US5 As a user, I want to modify my interface settings (i.e. turning on dark mode,
changing font size), so I can stay focused and comfortable during exam time.

The Planner Board representation of these user stories is shown in Figures 2.15 to 2.19 below.

37
Figure 2.15 User Stories for Proctor Dashboard

38
Figure 2.16 User Stories for Flag Review and Manual Control

Figure 2.17 User Stories for Session Recovery

39
Figure 2.18 User Stories for Report Exporting

Figure 2.19 User Stories for Customizable Exam UI

40
2.3.2 Functional Document
2.3.2.1 Introduction
The final and most consequential development part of ProctorPro is Sprint 3. Building on the
security in Sprint 1 and integrating intelligent monitoring in Sprint 2, Sprint 3 will focus on
scaling the features, optimizing the user experience, and preparing the platform for use in live
exams. Our focus is on building a real-time proctoring dashboard, manual review of flagged
incidents, seamless session recovery if the connection drops, and exam-integrity reporting.

The features delivered in Sprint 3 were intended to give instructors control during a large-scale
remote exam and to give the students a seamless and transparent experience.

2.3.2.2. Product Goal

The main focus of Sprint 3 is to make the system completely functional in exam scenarios. This
includes:

● Allowing proctors the ability to monitor many students all at once


● Providing manual tools to review flagged behavior.
● Allowing sessions to be recovered if the network connection drops.
● Providing integrity reports that can be given to institutions or students.
● Improving the user interface for better accessibility/clarity.

In the end, the main goal of this sprint is to make ProctorPro deployable, with functionality that
is powerful, yet practically applicable.

2.3.2.3. Business Processes

This sprint brings several real time workflows for live exams:

● Live Proctoring Dashboard: Proctors have access to a grid view of all students, and are
alerted visually to any suspicious activities
● Incident Review and Management: Proctors can click on a flagged alert, review audio or
video snippet evidence, and mark it as reviewed, dismissed, or escalated for later review.
● Session Recovery: If students are disconnected for some reason, the system will
automatically re-authenticate for them and return them to their live session.

41
● Report Generation: Once the exam is over, proctors and admins will be able to export
a full report which includes, but does not limit itself to, time stamped alerts, proctor
notes, and system recommendations.
● User interface and experience updates: Students will have a clean user interface that is
focused, with an optional dark mode, live status indicators at various levels on the
screen, and the least distractions possible.

2.3.2.5. Features

Feature 1: Live Proctoring Panel

● Description: Proctors can see all active exams on a single dashboard, with real-time
updates, alerts, and live video previews.
● Purpose: To give proctors a view from the top; manageable for short-staffed and
strained teams.

Feature 2: Manual Review Panel

● Description: Proctors can review and confirm flagged alerts, view short video clips,
leave comments, and override if necessary.
● Purpose: To ensure fairness and give the proctor ultimate power over determining
suspicious behavior.

Feature 3: Auto Reconnect for Session Continuity

● Description: If a student is disconnected, the system retains the session state, and
resumes when they reconnect.
● Purpose: To support students with unstable or poor internet and mitigate accidental
disqualifications.

Feature 4: Report Downloading

● Description: Admins can create and download integrity reports in PDF or CSV format
that captures a summary of alerts, timestamps, and outcomes.
● Purpose: To provide transparency and documentation for audit or academic review.

42
Feature 5: Final UI Improvements and Accessibility features

● Description: Final UI improvements, such as responsive design, dark/light modes,


accessibility settings, and enhanced exam-contingent indicators.
● Purpose: To minimize cognitive load during high-pressure exams with an interface that is
user-friendly, accessible, and calming to the eye.
2.3.2.7. Assumptions

1. Stable Internet is Often Available for All Users

We assume that the vast majority of students will have rather stable internet. Although the
platform has session auto-reconnect, frequent or longer disconnects could still affect the
continuity of their exam or the accuracy of the monitoring process.

2. Proctors Are Trained in Understanding and Using the Live Dashboard and Review.

We assume that instructors or proctors using the system have had basic training or an
onboarding session. They will understand how to interpret alerts, how to use the manual review
panel, and how to create reports.

3. Storage and Backend Resources Are Scalable As Needed

We assume that the backend infrastructure is capable of handling and scaling up the storage of
temporary video snapshots and logs, usually expected to have the most storage capacity during
high-volume and multiple concurrent user exam sessions.

4. Alert Thresholds Are Tuned Based on Prior Feedback

It is assumed that the system has been trained, or configured with data from Sprint 2, to reduce
false positives and improve true detections. The alert thresholds for gaze deviation, audio
anomalies, and multiple-face alerts have already been set.

5. Institutions Accepting Automated Reports For Review

We assume that academic institutions or exam boards feel comfortable accepting integrity
reports generated by the system (time stamped logs, summaries of alerts raised, comments
written by proctors, etc.) as valid supporting documents.

43
6. UI Customization for Focus not Avoidance

We envision options to toggle features such as themes or text size to be used appropriately by
students—for accessibility or comfort. We anticipate that students will not use this to hide
notifications or avoid supervision.

7. AI + Human Coordination Work Best

We envision this system of operation being a hybrid model—AI detection of anomalies and
human final decisions about events. The AI was never designed to be a judge and jury; it's a
smart assistant.

8. Exams Have a Fixed Format (Timed and Scheduled)

We assume that exams are scheduled with a fixed start and end time, rather than open-ended or
rolling exams for which students create their own schedules. This is important for tracking,
synchronization, and log generation.

2.3.3 Architecture Document


2.3.3.1. Application

This sprint finalizes the platform’s architecture by focusing on scalability, performance, and
modular communication between services.

Main components of the system and improvements

1.Real-Time Monitoring Dashboard (Proctor Side)

● Websocket powered for streaming video and alert experience without refresh
● Proctors can apply filters for alerts by severity or student ID.

2.Incident Review Module

● Connects to a backend that stores encrypted event snapshots.


● Supports comments, labels (like Confirmed, False Positive), and decision tracking

3.Session Recovery Handler

● Checks for dropped sessions.


● The connection is re-established automatically and the student is re-authenticated
(re-validated) against their biometric profile (a minimal sketch follows this component list).

44
4.Reporting Engine

● Formats exam data into human readable formats.


● It exports time-stamped incident logs, comments, and decisions.

5.UI/UX Engine

● A lightweight, responsive frontend that renders alerts and instructions to users based on
their current state.
● Support for high-contrast themes and keyboard navigation.
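The Session Recovery Handler above re-establishes a dropped exam session and re-validates the student before resuming. The sketch below shows one way that retry-with-backoff loop could look; connect_session, verify_face, and resume_exam are hypothetical stubs standing in for the real services, and the timing constants are assumptions.

# Sketch: reconnect a dropped exam session with exponential backoff, then
# re-verify the student before resuming. connect_session, verify_face and
# resume_exam are hypothetical stubs standing in for the real services.
import time

def connect_session(session_id: str) -> dict:
    """Stub for the real network call that restores the session context."""
    return {"id": session_id, "state": "restored"}

def verify_face(session: dict) -> bool:
    """Stub for biometric re-validation against the registered profile."""
    return True

def resume_exam(session: dict) -> None:
    """Stub that would restore timers, answers, and monitoring streams."""
    print(f"Session {session['id']} resumed")

def recover_session(session_id: str, max_attempts: int = 5) -> bool:
    """Try to restore a dropped session; return True if the exam resumes."""
    delay = 1.0
    for _ in range(max_attempts):
        try:
            session = connect_session(session_id)
        except ConnectionError:
            time.sleep(delay)                 # wait, then retry
            delay = min(delay * 2, 30.0)      # exponential backoff, capped at 30 s
            continue
        if verify_face(session):              # re-validate identity first
            resume_exam(session)
            return True
        return False                          # identity check failed
    return False                              # network never recovered

recover_session("exam-42")

Re-verifying the face before resuming prevents an interrupted session from being hijacked by a different person, which is why recovery and biometric re-validation are coupled in this component.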

Data Flow Overview

1.Student Side:

● Local AI processing (gaze, face, audio) → notifies the backend only on exceptions.

2.Server Side:

● Receives alerts → stores the event → updates the proctoring dashboard.

3.Proctor Side:

● Views the live dashboard → clicks on a flagged alert → confirms, dismisses, or

comments → the system logs the decision

4.Reporting:

● At the conclusion of the session, reports are produced and downloadable.
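Once a session ends, the Reporting Engine exports the time-stamped incidents as a downloadable report. The sketch below writes such a summary to CSV with Python's standard library; the column names, the sample rows, and the record shape are assumptions for illustration, since the report only states that alerts, timestamps, proctor comments, and decisions are included.

# Sketch: export a post-exam integrity report as CSV using the standard library.
# The column names and the illustrative sample rows are assumptions based on
# the report's description (timestamped alerts, proctor comments, decisions).
import csv

incidents = [
    {"timestamp": "2025-04-02T10:14:32Z", "student_id": "STU-001",
     "flag_type": "multiple_faces", "risk": "high",
     "proctor_decision": "Confirmed", "comment": "Second person visible"},
    {"timestamp": "2025-04-02T10:40:05Z", "student_id": "STU-002",
     "flag_type": "gaze_off_screen", "risk": "low",
     "proctor_decision": "False Positive", "comment": "Looking at rough sheet"},
]

def export_report(path: str, rows: list[dict]) -> None:
    """Write one row per flagged incident to a CSV file for download."""
    with open(path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

export_report("exam_integrity_report.csv", incidents)

A PDF export, as mentioned in Feature 4, would format the same records for human review rather than machine processing.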

Figure 2.20 Architecture Diagram

45
2.3.4 UI DESIGN

Figure 2.21 Student Dashboard

46
2.3.5 Functional Test Cases

Table 2.8 Functional Test Case of Sprint 3

47
2.3.6 Committed Vs Completed User Stories

Figure 2.22 Bar graph for Committed Vs Completed User Stories

2.3.7 Sprint Retrospective

48
Table 2.9 Sprint Retrospective for Sprint 3

49
CHAPTER 3
RESULTS AND DISCUSSION
3.1 Project Outcomes

Figure 2.23 Teacher Dashboard

Figure 2.24 Exam Management

50
CHAPTER 4
CONCLUSION & FUTURE ENHANCEMENTS

After many long months of planning, coding, testing, so much learning, and sometimes even
re-planning, we were ecstatic to finally launch ProctorPro.

And finally, here we were, with our own software package. But it wasn't just any software
package. We were responding to an urgent and reality-based issue in education: How do we
make online exams feel as legitimate as in-person exams? How will we create something that
preserves the integrity of academic honesty without making the exam feel cold and robotic?

With each sprint, each stand-up meeting, and each bug we squashed, we realized we were one
step closer to answering that question.

In Sprint 1 we concentrated on building the foundations the right way for the students:
registration, facial-recognition onboarding, and user roles. We were mindful of student
engagement and safety; we wanted engagement to be high while keeping the experience simple
enough not to overwhelm anyone. We focused on flows, design, and explaining to students what
the software was doing and why.

Sprint 2 is where ProctorPro got smart, literally: it watched, learned, and alerted. Gaze
tracking, audio monitoring, and multiple-face detection were each tested not just for whether
they worked technically, but for whether they were fair. We were intent on ensuring students
did not feel judged, and on supporting honest behaviour while discouraging dishonest behaviour,
without sounding like Big Brother.

Then it was on to Sprint 3, the last lap: the polish, the dashboard, the report output, and the
resiliency. This is where everything came together as a live ecosystem. Proctors watched
students in real time, reviewed their behaviours, and exported clean reports, while students
could recover from dropped Internet connections without alarm. We felt we had built something
of real utility.

Our pride does not stem from the features or the tech stack; it comes from the philosophy
behind the platform. We built ProctorPro with empathy, asking what it would feel like to be
watched and what it would feel like to do the watching. We sought to bridge that emotional gap
through respectful design, transparent alerts, and intelligent assistance rather than judgment.

This project taught us a lot: about AI, about ethics, about working in teams, and about what it
takes to ship something of value that solves real problems with technology. It made us better
developers, better collaborators, and, honestly, better listeners.

For us, ProctorPro is not the end of a project; it is the start of building technology that
feels human. That is what we hope it will inspire, wherever it goes next.

FUTURE ENHANCEMENTS
1. Voice Biometrics for Identity Verification

● Why it is important: ProctorPro currently identifies students visually using facial
recognition. However, in oral exams, interviews, and audio-based assessments, it would be even
more impactful to also verify the student's voice.
● How it helps: Voice biometrics would capture a student's speech patterns and tone and compare
them against their registered voiceprint to confirm identity, adding a valuable extra layer of
verification, especially in viva exams or oral quizzes (a minimal sketch of the comparison step
follows this item).
● Student benefit: Less exam stress, since students do not have to look directly at the camera
at all times for the system to know it is them.
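
As a rough illustration of the comparison step only (not a committed design), speaker
verification can be reduced to comparing a fresh voice embedding with the enrolled voiceprint.
The embedding extraction itself (typically a pretrained speaker-encoder model) is out of scope
here, and the 0.75 threshold is a placeholder.

import numpy as np

MATCH_THRESHOLD = 0.75  # placeholder; a real deployment would tune this on enrolment data

def is_same_speaker(enrolled_embedding, live_embedding):
    """Return True if the live voice sample matches the enrolled voiceprint."""
    a = np.asarray(enrolled_embedding, dtype=float)
    b = np.asarray(live_embedding, dtype=float)
    cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine >= MATCH_THRESHOLD

# Usage: both embeddings would come from the same speaker-encoder model,
# one captured at enrolment and one during the oral exam.
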
2. Adaptive AI That Learns Individual Behaviours

● Why it is important: ProctorPro currently uses fixed thresholds for flagging student
behaviours (e.g., looking away for too long). However, not all students behave in the same way.
● How it works: An adaptive AI would learn a student's natural habits over time. For instance,
one student may look away while thinking, while another may softly read questions aloud.
Instead of continuously auto-flagging these behaviours, the system would learn what is normal
for each student (a simple baseline-based sketch follows this item).
● Student benefit: Fewer false alerts, more trust, and monitoring that feels tailored to the
student rather than like constant judgement.
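
One simple way to prototype this adaptivity is to keep a per-student baseline of look-away
durations and flag only large deviations from it. The sketch below is an assumed approach with
illustrative parameters (history size, sigma, cold-start limit), not the logic shipped in the
current system.

from collections import deque

class AdaptiveGazeFlagger:
    """Flags a look-away only if it is unusually long for this particular student."""

    def __init__(self, history_size=50, sigma=3.0, cold_start_limit=8.0):
        self.durations = deque(maxlen=history_size)  # recent look-away lengths, in seconds
        self.sigma = sigma                           # how many standard deviations count as unusual
        self.cold_start_limit = cold_start_limit     # fixed fallback until enough history exists

    def observe(self, duration_seconds):
        """Record one look-away event and return True if it should be flagged."""
        if len(self.durations) < 10:                 # not enough history yet: use the fixed limit
            flag = duration_seconds > self.cold_start_limit
        else:
            mean = sum(self.durations) / len(self.durations)
            variance = sum((d - mean) ** 2 for d in self.durations) / len(self.durations)
            flag = duration_seconds > mean + self.sigma * variance ** 0.5
        self.durations.append(duration_seconds)
        return flag

Over a session, a student who habitually glances away while thinking accumulates a higher
baseline, so only genuinely unusual behaviour would reach the proctor.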

3. Compatibility with Mobile and Tablet Devices

● Why it is important: Many students, particularly in developing regions, do not own laptops
but do have smartphones.

● How it helps: Making ProctorPro mobile-compatible would allow students to take exams from
their phones or tablets while retaining most of the core capabilities, such as face tracking,
alerts, and integrity checks.
● Student benefit: Access to exams for everyone, regardless of the devices available or their
economic circumstances.

4. Integration with Learning Management Systems (LMS)

● What this means: Currently, students and instructors need to switch between ProctorPro and
the tools they already use, e.g., Moodle, Google Classroom, or Microsoft Teams.
● How it helps: Deeper integration with the LMS would let instructors set up and observe an
exam from their existing tools, and students could launch ProctorPro directly without logging
into another system.
● Instructor benefit: Smoother workflows, less technical friction, and simpler adoption across
schools and colleges.
5. Multi-Language and Cultural Adaptation

● Why it matters: ProctorPro may be used globally, by students from different cultures,
backgrounds, and language groups.
● How it helps: Providing multi-language support and adapting behavioural thresholds to
cultural differences would make the platform more inclusive and reduce misunderstandings. For
example, in many cultures avoiding direct eye contact is respectful, not suspicious.
● Student benefit: Less anxiety, better understanding, and a platform that feels designed for
them.

6. Log Security with Blockchain Technology

● Why it is important: In high-stakes exams, any doubt that logs or reports have been tampered
with compromises the whole exam process.
● How it helps: With a blockchain-backed store, all logs (face-match records, alerts,
timestamps) would become immutable and locked once the exam ends, which suits competitive
exams, government exams, and certifications (a simple hash-chain sketch follows this item).
● Institutional benefit: Legal-grade evidence and audit trails, and stronger credibility.
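
The tamper-evidence property can be sketched even without a full blockchain: if each log entry
stores a hash of the previous entry, altering or removing any record breaks the chain. The
snippet below is an illustrative hash chain only; anchoring the final hash to an actual
blockchain or timestamping service would be the further step this enhancement proposes.

import hashlib
import json
import time

def append_entry(chain, event):
    """Append an event whose hash also covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain):
    """Return True only if no entry has been modified, reordered, or removed."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True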

7. Live Proctor-Student Communication

● Why it matters: There are times when students do not even notice something is wrong; perhaps
their webcam slipped, or they leaned too far out of frame.
● How it helps: A built-in chat or voice notification would let the proctor say, “Please adjust
your camera” or “You are partially out of frame,” without stopping the exam or generating a
report.
● Student benefit: Instant feedback rather than silent flags; it feels human rather than like
an imposing authority.

8. Gamification and Positive Reinforcement

● Why it is important: Monitoring can be daunting, and students can feel like they are being
scrutinized.
● How it helps: Gamifying the experience with “Clean Session” badges or “Focus Streak” records
can encourage honest behaviour in a fun way; rewards, not just punishments, can change
behaviour.
● Student benefit: Motivation rather than fear, and a relaxed exam climate that still
encourages integrity.

Appendix

B. Sample Codes
